Table of Contents

Entropy, Volume 19, Issue 9 (September 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: In order to understand the logical architecture of living systems, von Neumann introduced the idea [...]
Displaying articles 1-73
Open Access Editorial: Nonequilibrium Phenomena in Confined Systems
Entropy 2017, 19(9), 507; https://doi.org/10.3390/e19090507
Received: 19 September 2017 / Revised: 19 September 2017 / Accepted: 19 September 2017 / Published: 20 September 2017
Cited by 1 | PDF Full-text (149 KB) | HTML Full-text | XML Full-text
Abstract
Confined systems exhibit a large variety of nonequilibrium phenomena. In this special issue, we have collected a limited number of papers that were presented during the XXV Sitges Conference on Statistical Mechanics, devoted to “Nonequilibrium phenomena in confined systems”. [...] Full article
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)
Open Access Article: An Efficient Advantage Distillation Scheme for Bidirectional Secret-Key Agreement
Entropy 2017, 19(9), 505; https://doi.org/10.3390/e19090505
Received: 30 July 2017 / Revised: 30 August 2017 / Accepted: 14 September 2017 / Published: 18 September 2017
PDF Full-text (579 KB) | HTML Full-text | XML Full-text
Abstract
The classical secret-key agreement (SKA) scheme includes three phases: (a) advantage distillation (AD), (b) reconciliation, and (c) privacy amplification. Define the transmission rate as the ratio between the number of raw key bits obtained in the AD phase and the number of bits transmitted during AD. The unidirectional SKA, whose transmission rate is 0.5, can be realized by using the original two-way wiretap channel as the AD phase. In this paper, we establish an efficient bidirectional SKA whose transmission rate is nearly 1 by modifying the two-way wiretap channel and using the modified channel as the AD phase. The bidirectional SKA can be extended to multiple rounds of SKA with the same performance and transmission rate. For multiple rounds of bidirectional SKA, we provide the bit error rate (BER) performance of the main channel and the eavesdropper’s channel, as well as the secret-key capacity. It is shown that the BER of the main channel is lower than that of the eavesdropper’s channel, and we prove that the transmission rate approaches 1 when the number of rounds is large. Moreover, the secret-key capacity C_s ranges from 0.04 to 0.1 as the channel error probability varies from 0.01 to 0.15 in the binary symmetric channel (BSC), and approaches 0.3 as the signal-to-noise ratio increases in the additive white Gaussian noise (AWGN) channel. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
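The BSC numbers quoted above can be put in context with the textbook secrecy capacity of a degraded binary symmetric wiretap channel, h(ε_eve) − h(ε_main). This is a generic illustrative sketch, not the paper's modified two-way scheme; the function names are our own:

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_wiretap_secrecy_capacity(eps_main, eps_eve):
    """Secrecy capacity h(eps_eve) - h(eps_main) of a degraded BSC wiretap
    channel; positive only when the main channel is less noisy."""
    return max(0.0, h2(eps_eve) - h2(eps_main))
```

For example, a noiseless main channel facing a fully noisy eavesdropper yields 1 bit per channel use, while the capacity collapses to zero as soon as the eavesdropper's channel is the cleaner one.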

Open Access Article: Second Law Analysis for Couple Stress Fluid Flow through a Porous Medium with Constant Heat Flux
Entropy 2017, 19(9), 498; https://doi.org/10.3390/e19090498
Received: 16 August 2017 / Revised: 4 September 2017 / Accepted: 11 September 2017 / Published: 18 September 2017
Cited by 1 | PDF Full-text (1758 KB) | HTML Full-text | XML Full-text
Abstract
In the present work, entropy generation in the flow and heat transfer of couple stress fluid through an infinite inclined channel embedded in a saturated porous medium is presented. Due to the channel geometry, the asymmetrical slip conditions are imposed on the channel walls. The upper wall of the channel is subjected to a constant heat flux while the lower wall is insulated. The equations governing the fluid flow are formulated, non-dimensionalized and solved by using the Adomian decomposition method. The Adomian series solutions for the velocity and temperature fields are then used to compute the entropy generation rate and inherent heat irreversibility in the flow domain. The effects of various fluid parameters are presented graphically and discussed extensively. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)

Open Access Article: Traction Inverter Open Switch Fault Diagnosis Based on Choi–Williams Distribution Spectral Kurtosis and Wavelet-Packet Energy Shannon Entropy
Entropy 2017, 19(9), 504; https://doi.org/10.3390/e19090504
Received: 4 September 2017 / Revised: 10 September 2017 / Accepted: 11 September 2017 / Published: 16 September 2017
PDF Full-text (5347 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a new approach for detecting and locating open switch faults in the closed-loop, inverter-fed, vector-controlled drives of Electric Multiple Units is proposed. Spectral kurtosis (SK) based on the Choi–Williams distribution (CWD) is a statistical tool that can effectively indicate the presence of transients and their locations in the frequency domain. Wavelet-packet energy Shannon entropy (WPESE) is appropriate for detecting transient changes in complex non-linear and non-stationary signals. Based on analyses of currents under normal and fault conditions, CWD-based SK and WPESE are combined with the DC component method: SK and WPESE are used for fault detection, and the DC component method is used for fault localization. This approach can diagnose the specific locations of faulty Insulated Gate Bipolar Transistors (IGBTs) with high accuracy, and it requires no additional devices. Experiments on the RT-LAB platform are carried out, and the experimental results verify the feasibility and effectiveness of the diagnosis method. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
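Once the wavelet-packet subband coefficients are available, the WPESE quantity used above reduces to the Shannon entropy of the normalized subband energies. A minimal sketch of that final step (the subband decomposition itself, e.g. via a wavelet library, is assumed to have been done already):

```python
from math import log2

def energy_shannon_entropy(subbands):
    """Shannon entropy (bits) of the energy distribution across
    wavelet-packet subbands; subbands is a list of coefficient sequences."""
    energies = [sum(c * c for c in band) for band in subbands]
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return -sum(p * log2(p) for p in probs)
```

Energy spread evenly across 2^k subbands gives the maximal entropy of k bits, while energy concentrated in a single subband gives 0; a fault transient redistributes energy and shifts this value.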

Open Access Article: Attribute Value Weighted Average of One-Dependence Estimators
Entropy 2017, 19(9), 501; https://doi.org/10.3390/e19090501
Received: 8 July 2017 / Revised: 16 August 2017 / Accepted: 11 September 2017 / Published: 16 September 2017
Cited by 2 | PDF Full-text (435 KB) | HTML Full-text | XML Full-text
Abstract
Of the numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, semi-naive Bayesian classifiers that utilize one-dependence estimators (ODEs) have been shown to approximate the ground-truth attribute dependencies well; meanwhile, the probability estimation in ODEs is effective, leading to excellent performance. In previous studies, ODEs were exploited directly in a simple way. For example, averaged one-dependence estimators (AODE) weaken the attribute independence assumption by directly averaging all of a constrained class of classifiers. However, all one-dependence estimators in AODE have the same weights and are treated equally. In this study, we propose a new paradigm based on a simple, efficient, and effective attribute value weighting approach, called attribute value weighted average of one-dependence estimators (AVWAODE). AVWAODE assigns discriminative weights to different ODEs by computing the correlation between each root attribute value and the class. Our approach uses two different attribute value weighting measures: the Kullback–Leibler (KL) measure and the information gain (IG) measure; thus, two different versions are created, denoted AVWAODE-KL and AVWAODE-IG, respectively. We experimentally tested them on a collection of 36 University of California at Irvine (UCI) datasets and found that both achieved better performance than some other state-of-the-art Bayesian classifiers used for comparison. Full article
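The IG measure underlying the AVWAODE-IG variant can be sketched as a standard information-gain computation between a discrete attribute and the class; this is a generic illustration of the measure, not the authors' exact weighting code:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (bits) of a discrete label sequence."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(attr_values, labels):
    """IG(C; A) = H(C) - H(C | A) for a discrete attribute and class."""
    n = len(labels)
    gain = entropy(labels)
    for value, count in Counter(attr_values).items():
        subset = [lab for a, lab in zip(attr_values, labels) if a == value]
        gain -= (count / n) * entropy(subset)
    return gain
```

An attribute value that perfectly predicts the class receives the maximal weight H(C); an attribute independent of the class receives weight 0.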

Open Access Article: Modeling NDVI Using Joint Entropy Method Considering Hydro-Meteorological Driving Factors in the Middle Reaches of Hei River Basin
Entropy 2017, 19(9), 502; https://doi.org/10.3390/e19090502
Received: 4 August 2017 / Revised: 8 September 2017 / Accepted: 13 September 2017 / Published: 15 September 2017
Cited by 3 | PDF Full-text (4562 KB) | HTML Full-text | XML Full-text
Abstract
Terrestrial vegetation dynamics are closely influenced by both hydrological processes and climate change. This study investigated the relationships between vegetation patterns and hydro-meteorological elements. The joint entropy method was employed to evaluate the dependence between the normalized difference vegetation index (NDVI) and coupled variables in the middle reaches of the Hei River basin. Based on the spatial distribution of mutual information, the whole study area was divided into five sub-regions. In each sub-region, nested statistical models were applied to model the NDVI on the grid and regional scales, respectively. Results showed that the annual average NDVI increased at a rate of 0.005/a over the past 11 years. In the desert regions, the NDVI increased significantly with increasing precipitation and temperature, and a high-accuracy NDVI retrieval model was obtained by coupling precipitation and temperature, especially in sub-region I. In the oasis regions, groundwater was also an important factor driving vegetation growth, and the rise of the groundwater level contributed to the growth of vegetation. However, the relationship was weaker in the artificial oasis regions (sub-region III and sub-region V) due to the influence of human activities such as irrigation. The overall correlation coefficient between the observed NDVI and the modeled NDVI was 0.97. The outcomes of this study are suitable for ecosystem monitoring, especially in the realm of climate change. Further studies are necessary and should consider more factors, such as runoff and irrigation. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
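The dependence screening described above rests on mutual information between NDVI and hydro-meteorological variables. A minimal histogram-based plug-in estimator (a common simple choice, not necessarily the authors' exact estimator) might look like:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in mutual information estimate (bits) from a 2D histogram
    of two samples x and y."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    independent = np.outer(px, py)  # product of marginals
    nz = pxy > 0                    # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / independent[nz])))
```

For identical inputs the estimate recovers the marginal entropy; for independent inputs it tends to zero (up to histogram bias), which is what makes it usable as a spatial dependence map.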

Open Access Article: Recall Performance for Content-Addressable Memory Using Adiabatic Quantum Optimization
Entropy 2017, 19(9), 500; https://doi.org/10.3390/e19090500
Received: 24 May 2017 / Revised: 10 September 2017 / Accepted: 11 September 2017 / Published: 15 September 2017
PDF Full-text (1152 KB) | HTML Full-text | XML Full-text
Abstract
A content-addressable memory (CAM) stores key-value associations such that the key is recalled by providing its associated value. While CAM recall is traditionally performed using recurrent neural network models, we show how to solve this problem using adiabatic quantum optimization. Our approach maps the recurrent neural network to a commercially available quantum processing unit by taking advantage of the common underlying Ising spin model. We then assess the accuracy of the quantum processor to store key-value associations by quantifying recall performance against an ensemble of problem sets. We observe that different learning rules from the neural network community influence recall accuracy but performance appears to be limited by potential noise in the processor. The strong connection established between quantum processors and neural network problems supports the growing intersection of these two ideas. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)
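The classical side of the mapping — CAM recall with a recurrent Hopfield-type network over Ising spins — can be sketched as follows. The Hebbian outer-product rule shown is one standard learning rule from the neural network community; the code is a classical simulation, not the quantum-annealer mapping itself:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian outer-product rule for +/-1 patterns of shape (P, N);
    the zero diagonal removes self-coupling."""
    _, n = patterns.shape
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, probe, steps=10):
    """Synchronous sign updates until a fixed point (or step limit)."""
    state = probe.astype(float)
    for _ in range(steps):
        nxt = np.where(w @ state >= 0, 1.0, -1.0)
        if np.array_equal(nxt, state):
            break
        state = nxt
    return state.astype(int)
```

A probe with a few corrupted bits relaxes to the stored pattern, which is exactly the key-value recall the paper reformulates as minimizing an Ising energy on the quantum processor.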

Open Access Article: Thermodynamics of Small Magnetic Particles
Entropy 2017, 19(9), 499; https://doi.org/10.3390/e19090499
Received: 2 August 2017 / Revised: 1 September 2017 / Accepted: 13 September 2017 / Published: 15 September 2017
Cited by 2 | PDF Full-text (568 KB) | HTML Full-text | XML Full-text
Abstract
In the present paper, we discuss the interpretation of some of the results of thermodynamics in the case of very small systems. Most of the usual statistical physics is done for systems with a huge number of elements, in what is called the thermodynamic limit, but not all of the approximations valid under those conditions can be extended to all properties of objects with fewer than a thousand elements. The starting point is the Ising model in two dimensions (2D), where an analytic solution exists, which allows us to validate the numerical techniques used in the present article. From there on, we introduce several variations motivated by small systems, such as the nanoscopic or even subnanoscopic particles that are nowadays produced for several applications. Magnetization is the main property investigated, with two possible devices in mind. The size of the systems (number of magnetic sites) is decreased so as to observe the departure from the results valid in the thermodynamic limit; periodic boundary conditions are eliminated to approach the reality of small particles; 1D, 2D and 3D systems are examined to appreciate the differences established by dimensionality in this small world; upon diluting the lattices, the effect of coordination number (bonding) is also explored; and, since the 2D Ising model is equivalent to the clock model with q = 2 degrees of freedom, we combine previous results with the supplementary degrees of freedom coming from the variation of q up to q = 20. Most of the previous results are numerical; however, for the case of a very small system, we obtain the exact partition function to compare with the conclusions drawn from our numerical results. 
Conclusions can be summarized in the following way: the laws of thermodynamics remain the same, but the interpretation of the results, averages and numerical treatments needs special care for systems with fewer than about a thousand constituents, and this might need to be adapted for different properties or devices. Full article
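For a genuinely small system, the partition function can indeed be evaluated exactly by brute-force enumeration, as the abstract notes. A sketch for a tiny 2D Ising lattice with free (non-periodic) boundaries, with our own conventions for the coupling J and the site indexing:

```python
from itertools import product
from math import exp

def ising_partition_2d(side, beta, j=1.0):
    """Exact partition function Z for a side x side Ising lattice with
    free boundaries, by summing exp(-beta * E) over all 2**(side*side)
    spin configurations (feasible only for very small lattices)."""
    n = side * side
    z = 0.0
    for spins in product((-1, 1), repeat=n):
        energy = 0.0
        for row in range(side):
            for col in range(side):
                s = spins[row * side + col]
                if row + 1 < side:  # bond to the neighbor below
                    energy -= j * s * spins[(row + 1) * side + col]
                if col + 1 < side:  # bond to the neighbor to the right
                    energy -= j * s * spins[row * side + col + 1]
        z += exp(-beta * energy)
    return z
```

At infinite temperature (beta = 0) every configuration contributes 1, so Z equals 2^N, a convenient sanity check; averages derived from this exact Z are what small-system numerics should be compared against.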

Open Access Article: Sum Capacity for Single-Cell Multi-User Systems with M-Ary Inputs
Entropy 2017, 19(9), 497; https://doi.org/10.3390/e19090497
Received: 30 June 2017 / Revised: 30 August 2017 / Accepted: 12 September 2017 / Published: 15 September 2017
PDF Full-text (1699 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates the sum capacity of a single-cell multi-user system under the constraint that the transmitted signal is drawn from an M-ary two-dimensional constellation with equal probability, for both the uplink, i.e., the multiple access channel (MAC), and the downlink, i.e., the broadcast channel (BC). Based on successive interference cancellation (SIC) and the entropy-power Gaussian approximation, it is shown that both the multi-user MAC and the BC can be approximated by a bank of parallel channels whose gains are modified by an extra attenuation factor equal to the negative exponential of the capacity of the interfering users. With this result, the capacity of the MAC and BC with an arbitrary number of users and arbitrary constellations can be easily calculated, in sharp contrast with traditional Monte Carlo simulation, whose computational cost increases exponentially with the number of users. Further, the multi-user sum capacity under different power allocation strategies, including equal power allocation, equal-capacity power allocation and maximum-capacity power allocation, is also investigated. For equal-capacity power allocation, a recursive relation for the power allocation solution is derived. For maximum-capacity power allocation, the necessary condition for optimal power allocation is obtained, and an optimal algorithm for the power allocation optimization problem is proposed based on this necessary condition. Full article
(This article belongs to the Special Issue Multiuser Information Theory)
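For Gaussian (rather than M-ary) inputs, the SIC decomposition referred to above has a simple closed form: each user is decoded while treating the not-yet-decoded users as noise. A hypothetical helper illustrating that Gaussian baseline, with our own parameter names:

```python
from math import log2

def sic_rates(powers, gains, noise=1.0):
    """Per-user rates (bits/channel use) for a Gaussian MAC decoded by
    SIC in list order; each user treats later users as interference."""
    remaining = sum(p * g for p, g in zip(powers, gains))
    rates = []
    for p, g in zip(powers, gains):
        remaining -= p * g  # this user's own signal is not interference
        rates.append(log2(1.0 + p * g / (noise + remaining)))
    return rates
```

Whatever the decoding order, the per-user rates sum to the Gaussian MAC sum capacity log2(1 + Σ_k P_k g_k / N0); the paper's contribution is the analogous (approximate) decomposition when inputs are constrained to finite constellations.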

Open Access Article: Entropic Data Envelopment Analysis: A Diversification Approach for Portfolio Optimization
Entropy 2017, 19(9), 352; https://doi.org/10.3390/e19090352
Received: 16 May 2017 / Revised: 7 July 2017 / Accepted: 7 July 2017 / Published: 15 September 2017
PDF Full-text (453 KB) | HTML Full-text | XML Full-text
Abstract
Recently, different methods have been proposed for portfolio optimization and decision making on investment issues. This article presents a new method for portfolio formation based on Data Envelopment Analysis (DEA) and the entropy function. This new portfolio optimization method applies DEA in association with a model resulting from the insertion of the entropy function directly into the optimization procedure. First, the DEA model was applied to perform a pre-selection of the assets. Then, the assets deemed efficient were submitted to the proposed model, which results from inserting the entropy function into the simplified Sharpe portfolio optimization model. As a result, improved asset participation in the portfolio was obtained. In the DEA model, several variables were evaluated and a low value of beta was achieved, guaranteeing greater robustness of the portfolio. The entropy function provided not only greater diversity but also a more feasible asset allocation. Additionally, the proposed method obtained better portfolio performance, as measured by the Sharpe ratio, than the comparative methods. Full article

Open Access Article: Log Likelihood Spectral Distance, Entropy Rate Power, and Mutual Information with Applications to Speech Coding
Entropy 2017, 19(9), 496; https://doi.org/10.3390/e19090496
Received: 22 August 2017 / Revised: 9 September 2017 / Accepted: 10 September 2017 / Published: 14 September 2017
PDF Full-text (1194 KB) | HTML Full-text | XML Full-text
Abstract
We provide a new derivation of the log likelihood spectral distance measure for signal processing using the logarithm of the ratio of entropy rate powers. Using this interpretation, we show that the log likelihood ratio is equivalent to the difference of two differential entropies, and further that it can be written as the difference of two mutual informations. These latter two expressions allow the analysis of signals via the log likelihood ratio to be extended beyond spectral matching to the study of their statistical quantities of differential entropy and mutual information. Examples from speech coding are presented to illustrate the utility of these new results. These new expressions allow the log likelihood ratio to be of interest in applications beyond those of just spectral matching for speech. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
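For Gaussian sources the identities in the abstract are easy to check numerically: the differential entropy is h = ½ ln(2πeσ²) nats, the entropy power of a Gaussian equals its variance, and the log ratio of entropy powers equals twice the difference of the differential entropies. A small sanity-check sketch under that Gaussian assumption (function names are ours):

```python
from math import log, pi, e

def gaussian_diff_entropy(var):
    """Differential entropy (nats) of a Gaussian with variance var."""
    return 0.5 * log(2.0 * pi * e * var)

def log_entropy_power_ratio(var1, var2):
    """log(N1 / N2) computed as 2 * (h1 - h2); for Gaussian sources
    this equals log(var1 / var2)."""
    return 2.0 * (gaussian_diff_entropy(var1) - gaussian_diff_entropy(var2))
```

This is the elementary special case of the paper's observation that the log likelihood spectral distance can be rewritten as a difference of differential entropies.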

Open Access Article: On the Capacity and the Optimal Sum-Rate of a Class of Dual-Band Interference Channels
Entropy 2017, 19(9), 495; https://doi.org/10.3390/e19090495
Received: 29 June 2017 / Revised: 8 September 2017 / Accepted: 11 September 2017 / Published: 14 September 2017
PDF Full-text (632 KB) | HTML Full-text | XML Full-text
Abstract
We study a class of two-transmitter, two-receiver dual-band Gaussian interference channels (GICs) that operate over the conventional microwave and the unconventional millimeter-wave (mm-wave) bands. This study is motivated by future 5G networks, where additional spectrum in the mm-wave band complements transmission in the incumbent microwave band. The mm-wave band has a key modeling feature: due to severe path loss and the relatively small wavelength, a transmitter must employ highly directional antenna arrays to reach its desired receiver. This feature makes the mm-wave channels highly directional, so a transmitter can use them to transmit to its designated receiver or to the other receiver. We consider two classes of such channels, where the underlying GIC in the microwave band has weak or strong interference, and obtain sufficient channel conditions under which the capacity is characterized. Moreover, we assess the impact of the additional mm-wave band spectrum on performance by characterizing the transmit power allocation for the direct and cross channels that maximizes the sum-rate of this dual-band channel. The solution reveals conditions under which different power allocations, such as allocating the power budget only to the direct or only to the cross channels, or sharing it among them, become optimal. Full article
(This article belongs to the Special Issue Multiuser Information Theory)

Open Access Feature Paper Article: Quantifying Information Modification in Developing Neural Networks via Partial Information Decomposition
Entropy 2017, 19(9), 494; https://doi.org/10.3390/e19090494
Received: 13 July 2017 / Revised: 12 September 2017 / Accepted: 12 September 2017 / Published: 14 September 2017
Cited by 2 | PDF Full-text (4808 KB) | HTML Full-text | XML Full-text
Abstract
Information processing performed by any system can be conceptually decomposed into the transfer, storage and modification of information—an idea dating all the way back to the work of Alan Turing. However, until very recently, formal information-theoretic definitions were only available for information transfer and storage, not for modification. This has changed with the extension of Shannon information theory via the decomposition of the mutual information between the inputs to and the output of a process into unique, shared and synergistic contributions from the inputs, called a partial information decomposition (PID). The synergistic contribution in particular has been identified as the basis for a definition of information modification. We here review the requirements for a functional definition of information modification in neuroscience, and apply a recently proposed measure of information modification to investigate the developmental trajectory of information modification in a culture of neurons in vitro, using partial information decomposition. We found that modification rose with maturation, but ultimately collapsed when redundant information among neurons took over. This indicates that this particular developing neural system initially developed intricate processing capabilities, but ultimately displayed information processing that was highly similar across neurons, possibly due to a lack of external inputs. We close by pointing out the enormous promise PID and the analysis of information modification hold for the understanding of neural systems. Full article

Open Access Feature Paper Article: On Generalized Stam Inequalities and Fisher–Rényi Complexity Measures
Entropy 2017, 19(9), 493; https://doi.org/10.3390/e19090493
Received: 21 August 2017 / Revised: 8 September 2017 / Accepted: 12 September 2017 / Published: 14 September 2017
PDF Full-text (489 KB) | HTML Full-text | XML Full-text
Abstract
Information-theoretic inequalities play a fundamental role in numerous scientific and technological areas (e.g., estimation and communication theories, signal and information processing, quantum physics, …) as they generally express the impossibility of having a complete description of a system via a finite number of information measures. In particular, they gave rise to the design of various quantifiers (statistical complexity measures) of the internal complexity of a (quantum) system. In this paper, we introduce a three-parametric Fisher–Rényi complexity, named the ( p , β , λ )-Fisher–Rényi complexity, based on both a two-parametric extension of the Fisher information and the Rényi entropies of a probability density function ρ characteristic of the system. This complexity measure quantifies the combined balance of the spreading and the gradient contents of ρ , and has the three main properties of a statistical complexity: invariance under translation and scaling transformations, and a universal bound from below. The latter is proved by generalizing the Stam inequality, which lower-bounds the product of the Shannon entropy power and the Fisher information of a probability density function. An extension of this inequality was already proposed by Bercher and Lutwak for a particular case of the general one, where the three parameters are linked, allowing one to determine the sharp lower bound and the associated probability density with minimal complexity. Using the notion of differential-escort deformation, we are able to determine the sharp bound of the complexity measure even when the three parameters are decoupled (in a certain range). We determine as well the distribution that saturates the inequality: the ( p , β , λ )-Gaussian distribution, which involves an inverse incomplete beta function. 
Finally, the complexity measure is calculated for various quantum-mechanical states of the harmonic and hydrogenic systems, which are the two main prototypes of physical systems subject to a central potential. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)
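For reference, the classical Stam inequality being generalized here reads, for an n-dimensional random vector X with differential entropy h(X) and Fisher information J(X), with equality if and only if X is Gaussian:

```latex
N(X)\, J(X) \ge n,
\qquad
N(X) = \frac{1}{2\pi e}\,\exp\!\left(\frac{2\,h(X)}{n}\right).
```

Here N(X) is the Shannon entropy power; the paper's generalization replaces h and J with Rényi-entropy and two-parametric Fisher-information analogues.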

Open Access Article: Eigentimes and Very Slow Processes
Entropy 2017, 19(9), 492; https://doi.org/10.3390/e19090492
Received: 1 July 2017 / Revised: 28 August 2017 / Accepted: 8 September 2017 / Published: 14 September 2017
Cited by 1 | PDF Full-text (1712 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the importance of the time and length scales at play in our descriptions of Nature. What can we observe at the atomic scale, at the laboratory (human) scale, and at the galactic scale? Which variables make sense? For every scale we wish to understand we need a set of variables which are linked through closed equations, i.e., everything can meaningfully be described in terms of those variables without the need to investigate other scales. Examples from physics, chemistry, and evolution are presented. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)

Open Access Article: Semantic Security with Practical Transmission Schemes over Fading Wiretap Channels
Entropy 2017, 19(9), 491; https://doi.org/10.3390/e19090491
Received: 17 July 2017 / Revised: 8 September 2017 / Accepted: 9 September 2017 / Published: 13 September 2017
Cited by 1 | PDF Full-text (560 KB) | HTML Full-text | XML Full-text
Abstract
We propose and assess an on–off protocol for communication over wireless wiretap channels with security at the physical layer. By taking advantage of suitable cryptographic primitives, the protocol we propose allows two legitimate parties to exchange confidential messages with some chosen level of semantic security against passive eavesdroppers, and without needing either pre-shared secret keys or public keys. The proposed method leverages the noisy and fading nature of the channel and exploits coding and all-or-nothing transforms to achieve the desired level of semantic security. We show that the use of fake packets in place of skipped transmissions during low channel quality periods yields significant advantages in terms of time needed to complete transmission of a secret message. Numerical examples are provided considering coding and modulation schemes included in the WiMax standard, thus showing that the proposed approach is feasible even with existing practical devices. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
Open AccessArticle Use of Mutual Information and Transfer Entropy to Assess Interaction between Parasympathetic and Sympathetic Activities of Nervous System from HRV
Entropy 2017, 19(9), 489; https://doi.org/10.3390/e19090489
Received: 17 July 2017 / Revised: 9 September 2017 / Accepted: 11 September 2017 / Published: 13 September 2017
Cited by 1 | PDF Full-text (2347 KB) | HTML Full-text | XML Full-text
Abstract
Obstructive sleep apnea (OSA) is a common sleep disorder that is often associated with reduced heart rate variability (HRV), indicating autonomic dysfunction. HRV is mainly composed of high-frequency components attributed to parasympathetic activity and low-frequency components attributed to sympathetic activity. Although time domain and frequency domain features of HRV have been used in sleep studies, the complex interaction between nonlinear independent frequency components and OSA is less well known. This study included 30 electrocardiogram recordings (20 from OSA patients and 10 from healthy subjects), with each 1-min segment labeled as apnea or normal. All segments were divided into three groups: the N-N group (normal segments of normal subjects), the P-N group (normal segments of OSA subjects) and the P-OSA group (apnea segments of OSA subjects). Frequency domain indices and interaction indices were extracted from the segmented RR intervals. The frequency domain indices included nuLF, nuHF, and the LF/HF ratio; the interaction indices included mutual information (MI) and transfer entropy (TE (H→L) and TE (L→H)). Our results demonstrated that the LF/HF ratio was significantly higher in the P-OSA group than in the N-N group and the P-N group. MI was significantly larger in the P-OSA group than in the P-N group. TE (H→L) and TE (L→H) showed a significant decrease in the P-OSA group compared to the P-N group and the N-N group. TE (H→L) was significantly negatively correlated with the LF/HF ratio in the P-N group (r = −0.789, p = 0.000) and the P-OSA group (r = −0.661, p = 0.002). Our results indicate that MI and TE are powerful tools for evaluating sympathovagal modulation in OSA. Moreover, sympathovagal modulation is more imbalanced in OSA patients during apnea events than during event-free periods. Full article
(This article belongs to the Special Issue Transfer Entropy II)
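Transfer entropy indices like those used in this study can be estimated with a simple plug-in approach. The following is a minimal sketch, not the authors' implementation: it estimates TE from one discretized series to another with history length 1; the bin count and base-2 logarithm are illustrative choices.

```python
import numpy as np

def transfer_entropy(source, target, bins=8):
    """Plug-in estimate of TE(source -> target) in bits, history length 1,
    after equal-width binning of both series."""
    s = np.digitize(source, np.histogram_bin_edges(source, bins))
    t = np.digitize(target, np.histogram_bin_edges(target, bins))

    # Rows of (t_{n+1}, t_n, s_n)
    triples = np.stack([t[1:], t[:-1], s[:-1]], axis=1)

    def H(cols):
        # Shannon entropy (bits) of the empirical joint distribution
        _, counts = np.unique(cols, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    # TE = H(t_{n+1}, t_n) - H(t_n) - H(t_{n+1}, t_n, s_n) + H(t_n, s_n)
    return H(triples[:, :2]) - H(triples[:, 1:2]) - H(triples) + H(triples[:, 1:])
```

A series fully driven by the source yields a large estimate, while independent series yield a value near zero (up to plug-in bias).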
Open AccessFeature PaperArticle Automated Diagnosis of Myocardial Infarction ECG Signals Using Sample Entropy in Flexible Analytic Wavelet Transform Framework
Entropy 2017, 19(9), 488; https://doi.org/10.3390/e19090488
Received: 28 July 2017 / Revised: 8 September 2017 / Accepted: 8 September 2017 / Published: 13 September 2017
Cited by 5 | PDF Full-text (834 KB) | HTML Full-text | XML Full-text
Abstract
Myocardial infarction (MI) is a silent condition that irreversibly damages the heart muscles. It expands rapidly and, if not treated in time, continues to damage the heart muscles. An electrocardiogram (ECG) is generally used by clinicians to diagnose MI patients. Manual identification of the changes introduced by MI is a time-consuming and tedious task, and there is also a possibility of misinterpretation of the changes in the ECG. Therefore, a method for the automatic diagnosis of MI from ECG beats using the flexible analytic wavelet transform (FAWT) is proposed in this work. First, the ECG signals are segmented into beats. Then, FAWT is applied to each ECG beat, decomposing it into subband signals. Sample entropy (SEnt) is computed from these subband signals and fed to random forest (RF), J48 decision tree, back propagation neural network (BPNN), and least-squares support vector machine (LS-SVM) classifiers to choose the highest performing one. We achieved the highest classification accuracy of 99.31% using the LS-SVM classifier. We also incorporated the Wilcoxon and Bhattacharya ranking methods and observed no improvement in performance. The proposed automated method can be installed in the intensive care units (ICUs) of hospitals to aid clinicians in confirming their diagnosis. Full article
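Sample entropy, the feature extracted from the subband signals here, has a compact definition that can be written down directly. This is a plain O(N²) sketch under common parameter defaults (m = 2, r = 0.2·SD), not the authors' code:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): negative log of the conditional probability that
    templates matching for m points (Chebyshev distance <= r) also
    match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)          # common tolerance choice
    n_templates = len(x) - m         # same template count for both lengths

    def matched_pairs(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)[:n_templates]
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        return (np.count_nonzero(d <= r) - n_templates) / 2  # drop self-matches

    B, A = matched_pairs(m), matched_pairs(m + 1)
    return -np.log(A / B) if A > 0 else np.inf
```

Regular signals (e.g., a clean sinusoid) score markedly lower than irregular ones, which is what makes SampEn a useful discriminative feature on subband signals.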
Open AccessArticle Simultaneous Wireless Information and Power Transfer for MIMO Interference Channel Networks Based on Interference Alignment
Entropy 2017, 19(9), 484; https://doi.org/10.3390/e19090484
Received: 1 July 2017 / Revised: 2 September 2017 / Accepted: 8 September 2017 / Published: 13 September 2017
Cited by 2 | PDF Full-text (911 KB) | HTML Full-text | XML Full-text
Abstract
This paper considers power splitting (PS)-based simultaneous wireless information and power transfer (SWIPT) for multiple-input multiple-output (MIMO) interference channel networks where multiple transceiver pairs share the same frequency spectrum. As the PS model is adopted, an individual receiver splits the received signal into two parts for information decoding (ID) and energy harvesting (EH), respectively. Aiming to minimize the total transmit power, the transmit precoders, receive filters and PS ratios are jointly designed under predefined signal-to-interference-plus-noise ratio (SINR) and EH constraints. The formulated joint transceiver design and power splitting problem is non-convex and thus difficult to solve directly. To obtain its solution effectively, the feasibility conditions of the formulated non-convex problem are first analyzed. Based on this analysis, an iterative algorithm is proposed that alternately optimizes the transmitters (together with the power splitting factors) and the receivers using semidefinite programming (SDP) relaxation. Moreover, considering the prohibitive computational cost of the SDP for practical applications, a low-complexity suboptimal scheme is proposed that separately designs interference-suppressing transceivers based on interference alignment (IA) and optimizes the transmit power allocation together with the splitting factors. The transmit power allocation and receive power splitting problem is then recast as a convex optimization problem and solved efficiently. To further reduce the computational complexity, a scheme is proposed that calculates the transmit power allocation and receive PS ratios in closed form. Simulation results show the effectiveness of the proposed schemes in achieving SWIPT for MIMO interference channel (IC) networks. Full article
(This article belongs to the Special Issue Network Information Theory)
Open AccessReview Born-Kothari Condensation for Fermions
Entropy 2017, 19(9), 479; https://doi.org/10.3390/e19090479
Received: 12 June 2017 / Revised: 2 September 2017 / Accepted: 6 September 2017 / Published: 13 September 2017
PDF Full-text (920 KB) | HTML Full-text | XML Full-text
Abstract
In the spirit of Bose–Einstein condensation, we present a detailed account of the statistical description of the condensation phenomena for a Fermi–Dirac gas, following the works of Born and Kothari. For bosons, the condensed phase below a certain critical temperature permits macroscopic occupation of the lowest-energy single-particle state; for fermions, due to the Pauli exclusion principle, the condensed phase occurs only in the form of singly occupied dense modes at the highest energy state. In spite of these rudimentary differences, our recent findings [Ghosh and Ray, 2017] identify the foregoing phenomenon as condensation-like coherence among fermions, analogous to a Bose–Einstein condensate, which is collectively described by a coherent matter wave. To reach the above conclusion, we employ the close relationship between the statistical methods of bosonic and fermionic fields pioneered by Cahill and Glauber. In addition to our previous results, we describe in this mini-review how the highest momentum (energy) for individual fermions, a prerequisite for the condensation process, can be specified in terms of the natural length and energy scales of the problem. The existence of such condensed phases, which is of obvious significance in the context of elementary particles, has also been scrutinized. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)
Open AccessArticle Use of the Principles of Maximum Entropy and Maximum Relative Entropy for the Determination of Uncertain Parameter Distributions in Engineering Applications
Entropy 2017, 19(9), 486; https://doi.org/10.3390/e19090486
Received: 31 July 2017 / Revised: 8 September 2017 / Accepted: 9 September 2017 / Published: 12 September 2017
Cited by 2 | PDF Full-text (8487 KB) | HTML Full-text | XML Full-text
Abstract
The determination of the probability distribution function (PDF) of uncertain input and model parameters in engineering application codes is an issue of importance for uncertainty quantification methods. One of the approaches that can be used for the PDF determination of input and model parameters is the application of methods based on the maximum entropy principle (MEP) and the maximum relative entropy principle (MREP). These methods determine the PDF that maximizes the information entropy when only partial information about the parameter distribution is known, such as some moments of the distribution and its support. In addition, this paper shows the application of the MREP to update the PDF when the parameter must fulfill some technical specifications (TS) imposed by the regulations. Three computer programs have been developed: GEDIPA, which provides the parameter PDF using empirical distribution function (EDF) methods; UNTHERCO, which performs the Monte Carlo sampling on the parameter distribution; and DCP, which updates the PDF considering the TS and the MREP. Finally, the paper presents several applications and examples of PDF determination applying the MEP and the MREP, and the influence of several factors on the PDF. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
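The MEP construction described above can be illustrated on a finite support: with only the mean known, the maximum-entropy pmf belongs to the exponential family p_i ∝ exp(λ·x_i), and λ can be found by bisection. This is a generic sketch of the principle (essentially Jaynes' dice problem), not the GEDIPA/UNTHERCO/DCP codes:

```python
import math

def maxent_pmf(support, mean, lo=-50.0, hi=50.0, tol=1e-12):
    """Maximum-entropy pmf on a finite support given only the mean:
    p_i proportional to exp(lam * x_i), with lam found by bisection."""
    def mean_of(lam):
        w = [math.exp(lam * x) for x in support]
        return sum(x * wi for x, wi in zip(support, w)) / sum(w)

    # mean_of is strictly increasing in lam (its derivative is a variance)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_of(mid) < mean:
            lo = mid
        else:
            hi = mid

    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]
```

For a die with the unconstrained mean 3.5, the result is uniform; raising the target mean tilts the pmf exponentially toward large faces.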
Open AccessArticle Optomechanical Analogy for Toy Cosmology with Quantized Scale Factor
Entropy 2017, 19(9), 485; https://doi.org/10.3390/e19090485
Received: 30 June 2017 / Revised: 23 August 2017 / Accepted: 8 September 2017 / Published: 12 September 2017
PDF Full-text (397 KB) | HTML Full-text | XML Full-text
Abstract
The simplest cosmology—the Friedmann–Robertson–Walker–Lemaître (FRW) model—describes a spatially homogeneous and isotropic universe where the scale factor is the only dynamical parameter. Here we consider how quantized electromagnetic fields become entangled with the scale factor in a toy version of the FRW model. A system consisting of a photon, source, and detector is described in such a universe, and we find that the detection of a redshifted photon by the detector system constrains possible scale factor superpositions. Thus, measuring the redshift of the photon is equivalent to a weak measurement of the underlying cosmology. We also consider a potential optomechanical analogy system that would enable experimental exploration of these concepts. The analogy focuses on the effects of photon redshift measurement as a quantum back-action on metric variables, where the position of a movable mirror plays the role of the scale factor. By working in the rotating frame, an effective Hubble equation can be simulated with a simple free moving mirror. Full article
(This article belongs to the collection Quantum Information)
Open AccessArticle On the Fragility of Bulk Metallic Glass Forming Liquids
Entropy 2017, 19(9), 483; https://doi.org/10.3390/e19090483
Received: 25 August 2017 / Revised: 5 September 2017 / Accepted: 7 September 2017 / Published: 10 September 2017
Cited by 3 | PDF Full-text (10408 KB) | HTML Full-text | XML Full-text
Abstract
In contrast to pure metals and most non-glass-forming alloys, metallic glass-formers are moderately strong liquids in terms of fragility. The notion of fragility of an undercooling liquid reflects the sensitivity of the viscosity of the liquid to temperature changes and describes the degree of departure of the liquid kinetics from the Arrhenius equation. In general, the fragility of metallic glass-formers increases with the complexity of the alloy, with differences between the alloy families, e.g., Pd-based alloys being more fragile than Zr-based alloys, which are more fragile than Mg-based alloys. Here, experimental data are assessed for 15 bulk metallic glass-formers, including the novel and technologically important systems based on Ni-Cr-Nb-P-B, Fe-Mo-Ni-Cr-P-C-B, and Au-Ag-Pd-Cu-Si. The data for the equilibrium viscosity are analyzed using the Vogel–Fulcher–Tammann (VFT) equation, the Mauro–Yue–Ellison–Gupta–Allan (MYEGA) equation, and the Adam–Gibbs approach based on specific heat capacity data. An overall larger trend of the excess specific heat is experimentally observed for the more fragile supercooled liquids than for the stronger liquids. Moreover, the stronger the glass, the higher the free enthalpy barrier to cooperative rearrangements, suggesting the same microscopic origin and rigorously connecting the kinetic and thermodynamic aspects of fragility. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)
Open AccessArticle A Characterization of the Domain of Beta-Divergence and Its Connection to Bregman Variational Model
Entropy 2017, 19(9), 482; https://doi.org/10.3390/e19090482
Received: 20 July 2017 / Revised: 4 September 2017 / Accepted: 7 September 2017 / Published: 9 September 2017
PDF Full-text (653 KB) | HTML Full-text | XML Full-text
Abstract
In image and signal processing, the beta-divergence is well known as a similarity measure between two positive objects. However, it is unclear whether or not the distance-like structure of the beta-divergence is preserved if we extend its domain to the negative region. In this article, we study the domain of the beta-divergence and its connection to the Bregman-divergence associated with a convex function of Legendre type. In fact, we show that the domain of the beta-divergence (and the corresponding Bregman-divergence) includes the negative region under a mild condition on the beta value. Additionally, through the relation between the beta-divergence and the Bregman-divergence, we can reformulate various variational models appearing in image processing problems into a unified framework, namely the Bregman variational model. This model has a strong advantage over the beta-divergence-based model due to the dual structure of the Bregman-divergence. As an example, we demonstrate how to build a convex reformulated variational model with a negative domain for a classic nonconvex problem that usually appears in synthetic aperture radar image processing. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
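For reference, the scalar beta-divergence family discussed above can be written down directly; the special cases β = 0 (Itakura–Saito), β = 1 (Kullback–Leibler) and β = 2 (half squared Euclidean) are the usual limits. This minimal sketch covers only positive scalar inputs, the standard setting whose relaxation to negative values is precisely the article's topic:

```python
import math

def beta_divergence(x, y, beta):
    """Scalar beta-divergence d_beta(x || y) for x, y > 0, interpolating
    Itakura-Saito (beta=0), KL (beta=1) and half squared Euclidean (beta=2)."""
    if beta == 0:
        return x / y - math.log(x / y) - 1
    if beta == 1:
        return x * math.log(x / y) - x + y
    # Generic case, well defined for beta not in {0, 1}
    return (x**beta + (beta - 1) * y**beta - beta * x * y**(beta - 1)) / (beta * (beta - 1))
```

As expected of a divergence, d_beta(x || x) = 0 and d_beta(x || y) > 0 for x ≠ y on the positive domain.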
Open AccessArticle Entropy Analysis on Electro-Kinetically Modulated Peristaltic Propulsion of Magnetized Nanofluid Flow through a Microchannel
Entropy 2017, 19(9), 481; https://doi.org/10.3390/e19090481
Received: 4 August 2017 / Revised: 2 September 2017 / Accepted: 7 September 2017 / Published: 9 September 2017
Cited by 16 | PDF Full-text (5855 KB) | HTML Full-text | XML Full-text
Abstract
A theoretical and mathematical model is presented to determine the entropy generation in the electro-kinetically modulated peristaltic propulsion of magnetized nanofluid flow through a microchannel with Joule heating. The mathematical modeling is based on the energy, momentum, continuity, and entropy equations in the Cartesian coordinate system. The effects of viscous dissipation, heat absorption, magnetic field, and electrokinetic body force are also taken into account. The electrical potential is modeled by means of the Poisson–Boltzmann equation, the ionic Nernst–Planck equation, and the Debye length approximation. A perturbation method has been applied to solve the coupled nonlinear partial differential equations, and a series solution is obtained up to second order. The physical behavior of all the governing parameters is discussed for the pressure rise, velocity profile, entropy profile, and temperature profile. Full article
(This article belongs to the Special Issue Entropy Generation in Nanofluid Flows)
Open AccessArticle Robust Biometric Authentication from an Information Theoretic Perspective
Entropy 2017, 19(9), 480; https://doi.org/10.3390/e19090480
Received: 22 June 2017 / Revised: 28 August 2017 / Accepted: 7 September 2017 / Published: 9 September 2017
Cited by 2 | PDF Full-text (327 KB) | HTML Full-text | XML Full-text
Abstract
Robust biometric authentication is studied from an information theoretic perspective. Compound sources are used to account for uncertainty in the knowledge of the source statistics and are further used to model certain attack classes. It is shown that authentication is robust against source uncertainty and a special class of attacks under the strong secrecy condition. A single-letter characterization of the privacy secrecy capacity region is derived for the generated and chosen secret key model. Furthermore, the question is studied whether small variations of the compound source lead to large losses of the privacy secrecy capacity region. It is shown that biometric authentication is robust in the sense that its privacy secrecy capacity region depends continuously on the compound source. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
Open AccessArticle Coupled Effects of Turing and Neimark-Sacker Bifurcations on Vegetation Pattern Self-Organization in a Discrete Vegetation-Sand Model
Entropy 2017, 19(9), 478; https://doi.org/10.3390/e19090478
Received: 12 July 2017 / Revised: 11 August 2017 / Accepted: 4 September 2017 / Published: 8 September 2017
PDF Full-text (20915 KB) | HTML Full-text | XML Full-text
Abstract
Wind-induced vegetation patterns were proposed a long time ago, but only recently has a dynamic vegetation-sand relationship been established. In this research, we transformed the continuous vegetation-sand model into a discrete model. Fixed points and their stability were then studied. Bifurcation analyses were performed around the fixed point, including the Neimark-Sacker and Turing bifurcations. We then simulated the parameter space for both bifurcations. Based on the bifurcation conditions, simulations were carried out around the bifurcation point. Simulation results showed that the Neimark-Sacker bifurcation and the Turing bifurcation can induce the self-organization of complex vegetation patterns, among which labyrinth and striped patterns are the key results that can be reproduced by the continuous model. Under the coupled effects of the two bifurcations, simulation results show that vegetation patterns can still self-organize, but the pattern type changes. The type of pattern can be the Turing type, the Neimark-Sacker type, or some other special type; the difference may depend on the relative intensity of each bifurcation. The calculation of entropy may help to understand the variation of pattern types. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
Open AccessArticle Implications of Coupling in Quantum Thermodynamic Machines
Entropy 2017, 19(9), 442; https://doi.org/10.3390/e19090442
Received: 5 July 2017 / Revised: 13 August 2017 / Accepted: 17 August 2017 / Published: 8 September 2017
Cited by 2 | PDF Full-text (490 KB) | HTML Full-text | XML Full-text
Abstract
We study coupled quantum systems as the working media of thermodynamic machines. Under a suitable phase-space transformation, the coupled systems can be expressed as a composition of independent subsystems. We find that for the coupled systems, the figures of merit, that is, the efficiency for the engine and the coefficient of performance for the refrigerator, are bounded (both from above and from below) by the corresponding figures of merit of the independent subsystems. We also show that the optimum work extractable from a coupled system is upper bounded by the optimum work obtained from the uncoupled system, thereby showing that the quantum correlations do not help in optimal work extraction. Further, we study two explicit examples: coupled spin-1/2 systems and coupled quantum oscillators with analogous interactions. Interestingly, for particular kinds of interactions, the efficiency of the coupled oscillators outperforms that of the coupled spin-1/2 systems when they work as heat engines. However, for the same interaction, the coefficient of performance behaves in the reverse manner when the systems work as refrigerators. Thus, the same coupling can cause opposite effects in the figures of merit of the heat engine and the refrigerator. Full article
(This article belongs to the Special Issue Quantum Thermodynamics)
Open AccessArticle A Fuzzy-Based Adaptive Streaming Algorithm for Reducing Entropy Rate of DASH Bitrate Fluctuation to Improve Mobile Quality of Service
Entropy 2017, 19(9), 477; https://doi.org/10.3390/e19090477
Received: 27 July 2017 / Revised: 3 September 2017 / Accepted: 4 September 2017 / Published: 7 September 2017
Cited by 1 | PDF Full-text (1032 KB) | HTML Full-text | XML Full-text
Abstract
Dynamic adaptive streaming over Hypertext Transfer Protocol (HTTP) is an advanced video streaming technology for dealing with the uncertainty of network states. However, this technology has one drawback: network states change frequently and continuously. The quality of a video stream fluctuates along with these network changes, which might reduce the quality of service. In recent years, many researchers have proposed adaptive streaming algorithms to reduce such changes. However, these algorithms only consider the current state of a network. Thus, they might produce inaccurate estimates of video quality in the near term. Therefore, in this paper, we propose a method using fuzzy logic and a moving average technique to reduce mobile video quality fluctuation in Dynamic Adaptive Streaming over HTTP (DASH). First, we calculate the moving averages of the bandwidth and buffer values for a given period. On the basis of the differences between the real and average values, we propose a fuzzy logic system to deduce the video quality representation for the next request. In addition, we use the entropy rate of a bandwidth measurement sequence to measure the predictability/stability of our method. The experimental results show that our proposed method reduces video quality fluctuation and improves bandwidth utilization by 40% compared to existing methods. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)
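The moving-average smoothing idea behind such controllers can be sketched compactly. The sketch below is illustrative only: a crude threshold rule stands in for the paper's fuzzy rule base over bandwidth and buffer differences, and the window, margin, and function names are assumptions.

```python
from collections import deque

def smoothed_bitrate_picker(representations, window=5):
    """Return a picker that chooses the highest bitrate fitting within a
    safety margin of the moving-average throughput. 'representations' is a
    sorted list of available bitrates (kbps)."""
    samples = deque(maxlen=window)

    def pick(measured_kbps, margin=0.8):
        samples.append(measured_kbps)
        avg = sum(samples) / len(samples)   # moving average smooths spikes
        budget = margin * avg
        # Highest representation that fits the smoothed budget, else the lowest
        fitting = [r for r in representations if r <= budget]
        return max(fitting) if fitting else representations[0]

    return pick
```

A single bandwidth spike (say 4000 kbps right after 1000 kbps) moves the selection up only one step instead of jumping to the top representation, which is the fluctuation-damping effect the entropy rate is meant to quantify.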
Open AccessArticle Entropies of Weighted Sums in Cyclic Groups and an Application to Polar Codes
Entropy 2017, 19(9), 235; https://doi.org/10.3390/e19090235
Received: 14 February 2017 / Revised: 5 April 2017 / Accepted: 5 April 2017 / Published: 7 September 2017
PDF Full-text (309 KB) | HTML Full-text | XML Full-text
Abstract
In this note, the following basic question is explored: in a cyclic group, how are the Shannon entropies of the sum and difference of i.i.d. random variables related to each other? For the integer group, we show that they can differ by any real number additively, but not too much multiplicatively; on the other hand, for Z/3Z, the entropy of the difference is always at least as large as that of the sum. These results are closely related to the study of more-sums-than-differences (i.e., MSTD) sets in additive combinatorics. We also investigate polar codes for q-ary input channels using non-canonical kernels to construct the generator matrix and present applications of our results to constructing polar codes with significantly improved error probability compared to the canonical construction. Full article
(This article belongs to the Special Issue Entropy and Information Inequalities)
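The Z/3Z claim above is easy to check numerically: for i.i.d. X, Y with pmf p on Z/3Z, compute the distributions of X + Y and X − Y (mod 3) and compare entropies. A small verification sketch (function names illustrative):

```python
import numpy as np
from itertools import product

def entropy_bits(p):
    """Shannon entropy in bits of a pmf, ignoring zero-probability atoms."""
    p = np.asarray(p)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def sum_diff_entropies(p):
    """For X, Y i.i.d. with pmf p on Z/3Z, return (H(X+Y), H(X-Y)) mod 3."""
    ps, pd = np.zeros(3), np.zeros(3)
    for i, j in product(range(3), repeat=2):
        ps[(i + j) % 3] += p[i] * p[j]
        pd[(i - j) % 3] += p[i] * p[j]
    return entropy_bits(ps), entropy_bits(pd)
```

Sampling random pmfs never produces a counterexample, consistent with the paper's result that H(X − Y) ≥ H(X + Y) on Z/3Z.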