Entropy, Volume 17, Issue 7 (July 2015) – 36 articles, Pages 4485–5144

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Article
Setting Diverging Colors for a Large-Scale Hypsometric Lunar Map Based on Entropy
by Xingguo Zeng, Lingli Mu, Jianjun Liu and Yiman Yang
Entropy 2015, 17(7), 5133-5144; https://doi.org/10.3390/e17075133 - 22 Jul 2015
Cited by 2 | Viewed by 4239
Abstract
A hypsometric map is a type of map used to represent topographic characteristics by filling different map areas with diverging colors. The setting of appropriate diverging colors is essential for the map to reveal topographic details. When lunar real environmental exploration programs are performed, large-scale hypsometric maps with a high resolution and greater topographic detail are helpful. Compared to the situation on Earth, fewer lunar exploration objects are available, and the topographic waviness is smaller at a large scale, indicating that presenting the topographic details using traditional hypsometric map-making methods may be difficult. To solve this problem, we employed the Chang’E2 (CE2) topographic and imagery data with a resolution of 7 m and developed a new hypsometric map-making method by setting the diverging colors based on information entropy. The resulting map showed that this method is suitable for presenting the topographic details and might be useful for developing a better understanding of the environment of the lunar surface. Full article
(This article belongs to the Special Issue Entropy, Utility, and Logical Reasoning)
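The idea of letting information entropy drive the color setting can be illustrated with a small, generic sketch: compare the Shannon entropy of how map cells are distributed over candidate elevation classes, and prefer break schemes that keep the classes informative. This is only an illustration of the general principle on assumed data (the helper class_break_entropy and the toy DEM are hypothetical), not the authors' CE2 processing chain.

```python
import numpy as np

def class_break_entropy(elevation, breaks):
    """Shannon entropy (bits) of the share of map cells falling in each elevation class."""
    counts, _ = np.histogram(elevation, bins=breaks)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy digital elevation model: gentle relief, roughly what a large-scale lunar map sees.
rng = np.random.default_rng(0)
dem = rng.normal(loc=-2000.0, scale=150.0, size=100_000)   # elevations in metres

equal_interval = np.linspace(dem.min(), dem.max(), 9)      # 8 equal-width classes
equal_count = np.quantile(dem, np.linspace(0.0, 1.0, 9))   # 8 quantile classes
print(class_break_entropy(dem, equal_interval))            # lower: classes unevenly filled
print(class_break_entropy(dem, equal_count))               # log2(8) = 3 bits, the maximum
```

Quantile-style breaks maximize the entropy of the class occupancy, which is one way to keep the diverging colors evenly "spent" on the subtle relief of a large-scale lunar map.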

Article
An Entropy-Based Approach to Path Analysis of Structural Generalized Linear Models: A Basic Idea
by Nobuoki Eshima, Minoru Tabata, Claudio Giovanni Borroni and Yutaka Kano
Entropy 2015, 17(7), 5117-5132; https://doi.org/10.3390/e17075117 - 22 Jul 2015
Cited by 3 | Viewed by 6771
Abstract
A path analysis method for causal systems based on generalized linear models is proposed by using entropy. A practical example is introduced, and a brief explanation of the entropy coefficient of determination is given. Direct and indirect effects of explanatory variables are discussed as log odds ratios, i.e., relative information, and a method for summarizing the effects is proposed. The example dataset is re-analyzed by using the method. Full article
(This article belongs to the Special Issue Entropy, Utility, and Logical Reasoning)

Article
Analytic Exact Upper Bound for the Lyapunov Dimension of the Shimizu–Morioka System
by Gennady A. Leonov, Tatyana A. Alexeeva and Nikolay V. Kuznetsov
Entropy 2015, 17(7), 5101-5116; https://doi.org/10.3390/e17075101 - 22 Jul 2015
Cited by 10 | Viewed by 4895
Abstract
In applied investigations, the invariance of the Lyapunov dimension under a diffeomorphism is often used. However, in the case of irregular linearization, this fact was not strictly considered in the classical works. In the present work, the invariance of the Lyapunov dimension under diffeomorphism is demonstrated in the general case. This fact is used to obtain the analytic exact upper bound of the Lyapunov dimension of an attractor of the Shimizu–Morioka system. Full article
(This article belongs to the Special Issue Recent Advances in Chaos Theory and Complex Networks)
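For reference, the Shimizu–Morioka system studied here is usually written in the standard form

  \dot{x} = y, \qquad \dot{y} = x - \lambda y - xz, \qquad \dot{z} = -\alpha z + x^{2},

with positive parameters λ and α (the paper's parameter naming may differ); the exact upper bound concerns the Lyapunov dimension of attractors of this flow.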

Article
Averaged Extended Tree Augmented Naive Classifier
by Aaron Meehan and Cassio P. De Campos
Entropy 2015, 17(7), 5085-5100; https://doi.org/10.3390/e17075085 - 21 Jul 2015
Cited by 7 | Viewed by 4658
Abstract
This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN), which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN) and Averaged One-Dependence Estimator (AODE) classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required. Full article
(This article belongs to the Special Issue Inductive Statistical Methods)

Article
Minimal Rényi–Ingarden–Urbanik Entropy of Multipartite Quantum States
by Marco Enríquez, Zbigniew Puchała and Karol Życzkowski
Entropy 2015, 17(7), 5063-5084; https://doi.org/10.3390/e17075063 - 20 Jul 2015
Cited by 12 | Viewed by 5420
Abstract
We study the entanglement of a pure state of a composite quantum system consisting of several subsystems with d levels each. It can be described by the Rényi–Ingarden–Urbanik entropy S_q of a decomposition of the state in a product basis, minimized over all local unitary transformations. In the case q = 0, this quantity becomes a function of the rank of the tensor representing the state, while in the limit q → ∞, the entropy becomes related to the overlap with the closest separable state and the geometric measure of entanglement. For any bipartite system, the entropy S_1 coincides with the standard entanglement entropy. We analyze the distribution of the minimal entropy for random states of three- and four-qubit systems. In the former case, the distribution of the three-tangle is studied and some of its moments are evaluated, while in the latter case, we analyze the distribution of the hyperdeterminant. The behavior of the maximum overlap of a three-qudit system with the closest separable state is also investigated in the asymptotic limit. Full article
(This article belongs to the Special Issue Quantum Computation and Information: Multi-Particle Aspects)
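Following the description in the abstract, the quantity involved is the Rényi entropy of the squared moduli of the expansion coefficients in a product basis, minimized over local unitaries (a schematic restatement, not necessarily the paper's notation):

  S_q^{RIU}(\psi) = \min_{U_1 \otimes \cdots \otimes U_N} \frac{1}{1-q} \ln \sum_{i_1,\dots,i_N} \left| C_{i_1 \cdots i_N} \right|^{2q},
  \qquad (U_1 \otimes \cdots \otimes U_N)\,|\psi\rangle = \sum_{i_1,\dots,i_N} C_{i_1 \cdots i_N}\, |i_1 \cdots i_N\rangle .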

Article
Evaluation of the Atmospheric Chemical Entropy Production of Mars
by Alfonso Delgado-Bonal and F. Javier Martín-Torres
Entropy 2015, 17(7), 5047-5062; https://doi.org/10.3390/e17075047 - 20 Jul 2015
Viewed by 5793
Abstract
Thermodynamic disequilibrium is a necessary situation in a system in which complex emergent structures are created and maintained. It is known that most of the chemical disequilibrium, a particular type of thermodynamic disequilibrium, in Earth’s atmosphere is a consequence of life. We have developed a thermochemical model for the Martian atmosphere to analyze the disequilibrium by chemical reactions calculating the entropy production. It follows from the comparison with the Earth atmosphere that the magnitude of the entropy produced by the recombination reaction forming O3 (O + O2 + CO2 ⥦ O3 + CO2) in the atmosphere of the Earth is larger than the entropy produced by the dominant set of chemical reactions considered for Mars, as a consequence of the low density and the poor variety of species of the Martian atmosphere. If disequilibrium is needed to create and maintain self-organizing structures in a system, we conclude that the current Martian atmosphere is unable to support large physico-chemical structures, such as those created on Earth. Full article
(This article belongs to the Section Thermodynamics)
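The bookkeeping behind such comparisons is the standard irreversible-thermodynamics expression for the entropy produced by a set of chemical reactions j with net rates r_j and affinities A_j (a generic formula, not the paper's specific model):

  \sigma_{chem} = \frac{1}{T} \sum_j A_j\, r_j \ge 0, \qquad A_j = -\sum_i \nu_{ij}\, \mu_i ,

where ν_{ij} are stoichiometric coefficients and μ_i are the chemical potentials of the species involved.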

Reply
Reply to C. Tsallis’ “Conceptual Inadequacy of the Shore and Johnson Axioms for Wide Classes of Complex Systems”
by Steve Pressé, Kingshuk Ghosh, Julian Lee and Ken A. Dill
Entropy 2015, 17(7), 5043-5046; https://doi.org/10.3390/e17075043 - 17 Jul 2015
Cited by 17 | Viewed by 4713
Abstract
In a recent PRL (2013, 111, 180604), we invoked the Shore and Johnson axioms which demonstrate that the least-biased way to infer probability distributions {p_i} from data is to maximize the Boltzmann-Gibbs entropy. We then showed which biases are introduced in models obtained by maximizing nonadditive entropies. A rebuttal of our work appears in Entropy (2015, 17, 2853) and argues that the Shore and Johnson axioms are inapplicable to a wide class of complex systems. Here we highlight the errors in this reasoning. Full article
(This article belongs to the Section Complexity)
Article
The Critical Point Entanglement and Chaos in the Dicke Model
by Lina Bao, Feng Pan, Jing Lu and Jerry P. Draayer
Entropy 2015, 17(7), 5022-5042; https://doi.org/10.3390/e17075022 - 16 Jul 2015
Cited by 5 | Viewed by 5761
Abstract
Ground state properties and level statistics of the Dicke model for a finite number of atoms are investigated based on a progressive diagonalization scheme (PDS). Particle number statistics, the entanglement measure and the Shannon information entropy at the resonance point in cases with a finite number of atoms as functions of the coupling parameter are calculated. It is shown that the entanglement measure defined in terms of the normalized von Neumann entropy of the reduced density matrix of the atoms reaches its maximum value at the critical point of the quantum phase transition where the system is most chaotic. Noticeable change in the Shannon information entropy near or at the critical point of the quantum phase transition is also observed. In addition, the quantum phase transition may be observed not only in the ground state mean photon number and the ground state atomic inversion as shown previously, but also in fluctuations of these two quantities in the ground state, especially in the atomic inversion fluctuation. Full article
(This article belongs to the Special Issue Quantum Computation and Information: Multi-Particle Aspects)
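For orientation, the Dicke Hamiltonian for N two-level atoms coupled to a single bosonic mode is commonly written as (conventions vary, so this is a generic form rather than the paper's exact notation)

  H = \omega\, a^{\dagger} a + \omega_0\, J_z + \frac{\lambda}{\sqrt{N}} \left( a + a^{\dagger} \right) \left( J_{+} + J_{-} \right),

with collective pseudo-spin operators J_z, J_± and atom–field coupling λ; the superradiant quantum phase transition occurs at a critical value of λ in the thermodynamic limit.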

Article
A Novel Method for Seismogenic Zoning Based on Triclustering: Application to the Iberian Peninsula
by Francisco Martínez-Álvarez, David Gutiérrez-Avilés, Antonio Morales-Esteban, Jorge Reyes, José L. Amaro-Mellado and Cristina Rubio-Escudero
Entropy 2015, 17(7), 5000-5021; https://doi.org/10.3390/e17075000 - 16 Jul 2015
Cited by 18 | Viewed by 4607
Abstract
A previous definition of seismogenic zones is required to perform a probabilistic seismic hazard analysis for areas of scattered and low seismic activity. Traditional zoning methods are based on the available seismic catalog and the geological structures. It is acknowledged that thermal and resistant parameters of the crust provide better criteria for zoning. Nonetheless, working out the rheological profiles involves great uncertainty. This has generated inconsistencies, as different zones have been proposed for the same area. A new method for seismogenic zoning by means of triclustering is proposed in this research. The main advantage is that it is solely based on seismic data. Almost no human decision is made, and therefore, the method is nearly unbiased. To assess its performance, the method has been applied to the Iberian Peninsula, which is characterized by the occurrence of small to moderate magnitude earthquakes. The catalog of the National Geographic Institute of Spain has been used. The output map is checked for validity with the geology. Moreover, a geographic information system has been used for two purposes. First, the obtained zones have been depicted within it. Second, the data have been used to calculate the seismic parameters (b-value, annual rate). Finally, the results have been compared to Kohonen’s self-organizing maps. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
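The seismic parameters mentioned at the end of the abstract come from the Gutenberg–Richter law; the b-value of a zone is typically estimated with Aki's maximum-likelihood formula (a standard estimator, not necessarily the exact variant used in the paper):

  b \approx \frac{\log_{10} e}{\bar{M} - M_{min}} ,

where \bar{M} is the mean magnitude of the events assigned to the zone and M_{min} is the completeness magnitude of the catalog.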

Article
Maximum Entropy Estimation of Probability Distribution of Variables in Higher Dimensions from Lower Dimensional Data
by Jayajit Das, Sayak Mukherjee and Susan E. Hodge
Entropy 2015, 17(7), 4986-4999; https://doi.org/10.3390/e17074986 - 15 Jul 2015
Cited by 4 | Viewed by 4471
Abstract
A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n), and Y (dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y→X is non-unique. In general, in the absence of additional information, there is no unique solution to Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples. Full article
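The forward case quoted in the abstract (X = Y1 + Y2 with Y1, Y2 uniform on [0, 1]) is easy to verify numerically; the sketch below only illustrates that well-posed direction, not the paper's MaxEnt estimate for the inverse n > m setting.

```python
import numpy as np

# Forward problem from the abstract's example: Y1, Y2 ~ Uniform(0, 1) independent and
# X = Y1 + Y2 (m = 2, n = 1).  Monte Carlo recovers the triangular density on [0, 2];
# the paper treats the harder inverse direction, where n > m and Q(x) is not unique.
rng = np.random.default_rng(0)
y = rng.uniform(0.0, 1.0, size=(1_000_000, 2))
x = y.sum(axis=1)

hist, edges = np.histogram(x, bins=50, range=(0.0, 2.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
triangular = np.where(centers <= 1.0, centers, 2.0 - centers)   # exact density of Y1 + Y2
print(np.max(np.abs(hist - triangular)))                        # small discretization error
```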

Article
Lag Synchronization of Complex Lorenz System with Applications to Communication
by Fangfang Zhang
Entropy 2015, 17(7), 4974-4985; https://doi.org/10.3390/e17074974 - 15 Jul 2015
Cited by 27 | Viewed by 4205
Abstract
In communication, owing to the transmission delay, the signal at the receiver at time t is the signal sent by the transmitter at time t − τ, where τ ≥ 0 is the lag time. Therefore, lag synchronization (LS) is more accurate than complete synchronization for designing communication schemes. Taking the complex Lorenz system as an example, we design an LS controller based on error feedback. Using chaotic masking, we propose a communication scheme based on LS and independent component analysis (ICA). It can transmit multiple messages of various amplitudes and is robust against noise. Numerical simulations verify the feasibility and effectiveness of the presented schemes. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
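In this setting, lag synchronization between a drive trajectory x(t) and a response trajectory y(t) means

  \lim_{t \to \infty} \| y(t) - x(t - \tau) \| = 0, \qquad \tau \ge 0,

with complete synchronization recovered as the special case τ = 0.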

Article
Broad Niche Overlap between Invasive Nile Tilapia Oreochromis niloticus and Indigenous Congenerics in Southern Africa: Should We be Concerned?
by Tsungai A. Zengeya, Anthony J. Booth and Christian T. Chimimba
Entropy 2015, 17(7), 4959-4973; https://doi.org/10.3390/e17074959 - 14 Jul 2015
Cited by 28 | Viewed by 6939
Abstract
This study developed niche models for the native ranges of Oreochromis andersonii, O. mortimeri, and O. mossambicus, and assessed how much of their range is climatically suitable for the establishment of O. niloticus, and then reviewed the conservation implications for indigenous congenerics as a result of overlap with O. niloticus based on documented congeneric interactions. The predicted potential geographical range of O. niloticus reveals a broad climatic suitability over most of southern Africa and overlaps with all the endemic congenerics. This is of major conservation concern because six of the eight river systems predicted to be suitable for O. niloticus have already been invaded and now support established populations. Oreochromis niloticus has been implicated in reducing the abundance of indigenous species through competitive exclusion and hybridisation. Despite these well-documented adverse ecological effects, O. niloticus remains one of the most widely cultured and propagated fish species in aquaculture and stock enhancements in the southern Africa sub-region. Aquaculture is perceived as a means of protein security, poverty alleviation, and economic development and, as such, any future decisions on its introduction will be based on the trade-off between socio-economic benefits and potential adverse ecological effects. Full article
(This article belongs to the Special Issue Entropy in Hydrology)

Article
Identity Authentication over Noisy Channels
by Fanfan Zheng, Zhiqing Xiao, Shidong Zhou, Jing Wang and Lianfen Huang
Entropy 2015, 17(7), 4940-4958; https://doi.org/10.3390/e17074940 - 14 Jul 2015
Cited by 4 | Viewed by 4824
Abstract
Identity authentication is the process of verifying users’ validity. Unlike classical key-based authentications, which are built on noiseless channels, this paper introduces a general analysis and design framework for identity authentication over noisy channels. Specifically, the single-time and multiple-time authentication scenarios are investigated. For each scenario, the lower bound on the opponent’s success probability is derived, and it is smaller than that of classical identity authentication. In addition, it can remain the same even if the secret key is reused. Remarkably, the Cartesian authentication code proves to be helpful for hiding the secret key to maximize the secrecy performance. Finally, we show a potential application of this authentication technique. Full article

Article
Fisher Information Properties
by Pablo Zegers
Entropy 2015, 17(7), 4918-4939; https://doi.org/10.3390/e17074918 - 13 Jul 2015
Cited by 35 | Viewed by 8354
Abstract
A set of Fisher information properties are presented in order to draw a parallel with similar properties of Shannon differential entropy. Already known properties are presented together with new ones, which include: (i) a generalization of mutual information for Fisher information; (ii) a new proof that Fisher information increases under conditioning; (iii) showing that Fisher information decreases in Markov chains; and (iv) bound estimation error using Fisher information. This last result is especially important, because it completes Fano’s inequality, i.e., a lower bound for estimation error, showing that Fisher information can be used to define an upper bound for this error. In this way, it is shown that Shannon’s differential entropy, which quantifies the behavior of the random variable, and the Fisher information, which quantifies the internal structure of the density function that defines the random variable, can be used to characterize the estimation error. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
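For reference, the Fisher information of a parametric density f(x; θ) and the Cramér–Rao bound it enters are (standard definitions; the bounds derived in the paper go beyond these)

  I(\theta) = \mathbb{E}\left[ \left( \frac{\partial}{\partial \theta} \ln f(X;\theta) \right)^{2} \right],
  \qquad \mathrm{Var}(\hat{\theta}) \ge \frac{1}{n\, I(\theta)}

for an unbiased estimator \hat{\theta} built from n i.i.d. samples.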
Article
Informational and Causal Architecture of Discrete-Time Renewal Processes
by Sarah E. Marzen and James P. Crutchfield
Entropy 2015, 17(7), 4891-4917; https://doi.org/10.3390/e17074891 - 13 Jul 2015
Cited by 26 | Viewed by 5762
Abstract
Renewal processes are broadly used to model stochastic behavior consisting of isolated events separated by periods of quiescence, whose durations are specified by a given probability law. Here, we identify the minimal sufficient statistic for their prediction (the set of causal states), calculate the historical memory capacity required to store those states (statistical complexity), delineate what information is predictable (excess entropy), and decompose the entropy of a single measurement into that shared with the past, future, or both. The causal state equivalence relation defines a new subclass of renewal processes with a finite number of causal states despite having an unbounded interevent count distribution. We use the resulting formulae to analyze the output of the parametrized Simple Nonunifilar Source, generated by a simple two-state hidden Markov model, but with an infinite-state machine presentation. All in all, the results lay the groundwork for analyzing more complex processes with infinite statistical complexity and infinite excess entropy. Full article
(This article belongs to the Section Statistical Physics)
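The two central quantities of the abstract have standard computational-mechanics definitions: the statistical complexity is the Shannon entropy of the causal-state distribution, and the excess entropy is the mutual information between the past and the future of the process,

  C_\mu = -\sum_{\sigma} \Pr(\sigma) \log_2 \Pr(\sigma),
  \qquad \mathbf{E} = I[\,\text{past}\,;\,\text{future}\,].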

Article
Entropy, Information and Complexity or Which Aims the Arrow of Time?
by George E. Mikhailovsky and Alexander P. Levich
Entropy 2015, 17(7), 4863-4890; https://doi.org/10.3390/e17074863 - 10 Jul 2015
Cited by 20 | Viewed by 9371
Abstract
In this article, we analyze the interrelationships among such notions as entropy, information, complexity, order and chaos and show using the theory of categories how to generalize the second law of thermodynamics as a law of increasing generalized entropy or a general law of complification. This law could be applied to any system with morphisms, including all of our universe and its subsystems. We discuss how such a general law and other laws of nature drive the evolution of the universe, including physicochemical and biological evolutions. In addition, we determine eliminating selection in physicochemical evolution as an extremely simplified prototype of natural selection. Laws of nature do not allow complexity and entropy to reach maximal values by generating structures. One could consider them as a kind of “breeder” of such selection. Full article

Article
Faster Together: Collective Quantum Search
by Demosthenes Ellinas and Christos Konstandakis
Entropy 2015, 17(7), 4838-4862; https://doi.org/10.3390/e17074838 - 10 Jul 2015
Cited by 3 | Viewed by 4056
Abstract
Joining independent quantum searches provides novel collective modes of quantum search (merging) by utilizing the algorithm’s underlying algebraic structure. If n quantum searches, each targeting a single item, join the domains of their classical oracle functions and sum their Hilbert spaces (merging), instead of acting independently (concatenation), then they achieve a reduction of the search complexity by factor O(√n). Full article
(This article belongs to the Special Issue Quantum Computation and Information: Multi-Particle Aspects)
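One back-of-the-envelope way to see where a factor of O(\sqrt{n}) can arise (an illustrative query count under standard Grover scaling, not the paper's algebraic merging construction): n separate searches, each over N items, cost about n \cdot \frac{\pi}{4}\sqrt{N} oracle queries in total, whereas a single search over the merged domain of nN items for the one combined target costs about \frac{\pi}{4}\sqrt{nN}, so the ratio is

  \frac{n\,\frac{\pi}{4}\sqrt{N}}{\frac{\pi}{4}\sqrt{nN}} = \sqrt{n}.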

Article
Power-Type Functions of Prediction Error of Sea Level Time Series
by Ming Li, Yuanchun Li and Jianxing Leng
Entropy 2015, 17(7), 4809-4837; https://doi.org/10.3390/e17074809 - 09 Jul 2015
Cited by 11 | Viewed by 5410
Abstract
This paper gives the quantitative relationship between prediction error and the given past sample size in our research on sea level time series. The results show that the prediction error of sea level time series, as a function of the given past sample size, follows decaying power functions, providing a quantitative guideline for the quality control of sea level prediction. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory I)

Article
Modeling and Analysis of Entropy Generation in Light Heating of Nanoscaled Silicon and Germanium Thin Films
by José Ernesto Nájera-Carpio, Federico Vázquez and Aldo Figueroa
Entropy 2015, 17(7), 4786-4808; https://doi.org/10.3390/e17074786 - 09 Jul 2015
Viewed by 4764
Abstract
In this work, the irreversible processes in light heating of Silicon (Si) and Germanium (Ge) thin films are examined. Each film is exposed to light irradiation with radiative and convective boundary conditions. Heat, electron and hole transport and generation-recombination processes of electron-hole pairs are studied in terms of a phenomenological model obtained from basic principles of irreversible thermodynamics. We present an analysis of the contributions to the entropy production in the stationary state due to the dissipative effects associated with electron and hole transport, generation-recombination of electron-hole pairs as well as heat transport. The most significant contribution to the entropy production comes from the interaction of light with the medium in both Si and Ge. This interaction includes two processes, namely, the generation of electron-hole pairs and the transferring of energy from the absorbed light to the lattice. In Si the following contribution in magnitude comes from the heat transport. In Ge all the remaining contributions to entropy production have nearly the same order of magnitude. The results are compared and explained addressing the differences in the magnitude of the thermodynamic forces, Onsager’s coefficients and transport properties of Si and Ge. Full article
(This article belongs to the Special Issue Entropy Generation in Thermal Systems and Processes 2015)

Article
Fractional Differential Texture Descriptors Based on the Machado Entropy for Image Splicing Detection
by Rabha W. Ibrahim, Zahra Moghaddasi, Hamid A. Jalab and Rafidah Md Noor
Entropy 2015, 17(7), 4775-4785; https://doi.org/10.3390/e17074775 - 08 Jul 2015
Cited by 30 | Viewed by 5228
Abstract
Image splicing is a common operation in image forgery. Different techniques of image splicing detection have been utilized to regain people’s trust. This study introduces a texture enhancement technique involving the use of fractional differential masks based on the Machado entropy. The masks slide over the tampered image, and each pixel of the tampered image is convolved with the fractional mask weight window on eight directions. Consequently, the fractional differential texture descriptors are extracted using the gray-level co-occurrence matrix for image splicing detection. The support vector machine is used as a classifier that distinguishes between authentic and spliced images. Results prove that the achieved improvements of the proposed algorithm are compatible with other splicing detection methods. Full article
(This article belongs to the Special Issue Complex and Fractional Dynamics)
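A generic version of the descriptor-plus-classifier stage can be sketched as below (gray-level co-occurrence features fed to an SVM). The fractional differential masks based on the Machado entropy, which are the paper's actual contribution, are not reproduced here, and X_images and y are hypothetical placeholders.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19 naming
from sklearn.svm import SVC

def glcm_features(image_u8, distances=(1,), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Gray-level co-occurrence texture descriptors for one (already filtered) uint8 image."""
    glcm = graycomatrix(image_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X_images: list of uint8 grayscale images (here, the fractionally filtered images);
# y: labels with 0 = authentic, 1 = spliced.  Both are hypothetical placeholders.
# features = np.vstack([glcm_features(img) for img in X_images])
# clf = SVC(kernel="rbf").fit(features, y)
```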

Article
H∞ Control for Markov Jump Systems with Nonlinear Noise Intensity Function and Uncertain Transition Rates
by Xiaonian Wang and Yafeng Guo
Entropy 2015, 17(7), 4762-4774; https://doi.org/10.3390/e17074762 - 06 Jul 2015
Cited by 2 | Viewed by 3765
Abstract
The problem of robust H∞ control is investigated for Markov jump systems with nonlinear noise intensity function and uncertain transition rates. A robust H∞ performance criterion is developed for the given systems for the first time. Based on the developed performance criterion, the desired H∞ state-feedback controller is also designed, which guarantees the robust H∞ performance of the closed-loop system. All the conditions are in terms of linear matrix inequalities (LMIs), and hence they can be readily solved by any LMI solver. Finally, a numerical example is provided to demonstrate the effectiveness of the proposed methods. Full article
(This article belongs to the Special Issue Complex and Fractional Dynamics)
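As an illustration of the kind of LMI condition the abstract refers to, the sketch below checks a plain Lyapunov inequality with CVXPY; it is a generic feasibility example for an arbitrary stable matrix A, not the paper's H∞ synthesis conditions.

```python
import numpy as np
import cvxpy as cp

# Generic LMI feasibility problem: find P = P^T > 0 with A^T P + P A < 0.
A = np.array([[-2.0,  1.0],
              [ 0.0, -3.0]])
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print(problem.status, np.round(P.value, 3))
```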

Article
Energetic and Exergetic Analysis of an Ejector-Expansion Refrigeration Cycle Using the Working Fluid R32
by Zhenying Zhang, Lirui Tong, Li Chang, Yanhua Chen and Xingguo Wang
Entropy 2015, 17(7), 4744-4761; https://doi.org/10.3390/e17074744 - 06 Jul 2015
Cited by 17 | Viewed by 6393
Abstract
The performance characteristics of an ejector-expansion refrigeration cycle (EEC) using R32 have been investigated in comparison with that using R134a. The coefficient of performance (COP), the exergy destruction, the exergy efficiency and the suction nozzle pressure drop (SNPD) are discussed. The results show that the application of an ejector instead of a throttle valve in R32 cycle decreases the cycle’s total exergy destruction by 8.84%–15.84% in comparison with the basic cycle (BC). The R32 EEC provides 5.22%–13.77% COP improvement and 5.13%–13.83% exergy efficiency improvement respectively over the BC for the given ranges of evaporating and condensing temperatures. There exists an optimum suction nozzle pressure drop (SNPD) which gives a maximum system COP and volumetric cooling capacity (VCC) under a specified condition. The value of the optimum SNPD mainly depends on the efficiencies of the ejector components, but is virtually independent of evaporating temperature and condensing temperature. In addition, the improvement of the component efficiency, especially the efficiencies of diffusion nozzle and the motive nozzle, can enhance the EEC performance. Full article

Article
Asymptotic Description of Neural Networks with Correlated Synaptic Weights
by Olivier Faugeras and James MacLaurin
Entropy 2015, 17(7), 4701-4743; https://doi.org/10.3390/e17074701 - 06 Jul 2015
Cited by 9 | Viewed by 4951
Abstract
We study the asymptotic law of a network of interacting neurons when the number of neurons becomes infinite. Given a completely connected network of neurons in which the synaptic weights are Gaussian correlated random variables, we describe the asymptotic law of the network when the number of neurons goes to infinity. We introduce the process-level empirical measure of the trajectories of the solutions to the equations of the finite network of neurons and the averaged law (with respect to the synaptic weights) of the trajectories of the solutions to the equations of the network of neurons. The main result of this article is that the image law through the empirical measure satisfies a large deviation principle with a good rate function which is shown to have a unique global minimum. Our analysis of the rate function allows us also to characterize the limit measure as the image of a stationary Gaussian measure defined on a transformed set of trajectories. Full article
(This article belongs to the Special Issue Entropic Aspects in Statistical Physics of Complex Systems)
Article
Geometric Interpretation of Surface Tension Equilibrium in Superhydrophobic Systems
by Michael Nosonovsky and Rahul Ramachandran
Entropy 2015, 17(7), 4684-4700; https://doi.org/10.3390/e17074684 - 06 Jul 2015
Cited by 28 | Viewed by 11880
Abstract
Surface tension and surface energy are closely related, although not identical concepts. Surface tension is a generalized force; unlike a conventional mechanical force, it is not applied to any particular body or point. Using this notion, we suggest a simple geometric interpretation of the Young, Wenzel, Cassie, Antonoff and Girifalco–Good equations for the equilibrium during wetting. This approach extends the traditional concept of Neumann’s triangle. Substances are presented as points, while tensions are vectors connecting the points, and the equations and inequalities of wetting equilibrium obtain simple geometric meaning with the surface roughness effect interpreted as stretching of corresponding vectors; surface heterogeneity is their linear combination, and contact angle hysteresis is rotation. We discuss energy dissipation mechanisms during wetting due to contact angle hysteresis, the superhydrophobicity and the possible entropic nature of the surface tension. Full article
(This article belongs to the Special Issue Geometry in Thermodynamics)
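The wetting relations given a geometric reading here are, in their standard forms,

  \gamma_{SV} = \gamma_{SL} + \gamma_{LV} \cos\theta_Y \quad \text{(Young)}, \qquad
  \cos\theta_W = r \cos\theta_Y \quad \text{(Wenzel)}, \qquad
  \cos\theta_C = f_1 \cos\theta_1 + f_2 \cos\theta_2 \quad \text{(Cassie)},

with r ≥ 1 the roughness factor and f_1 + f_2 = 1 the area fractions of the two surface components.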

Article
A New Feature Extraction Method Based on the Information Fusion of Entropy Matrix and Covariance Matrix and Its Application in Face Recognition
by Shunfang Wang and Ping Liu
Entropy 2015, 17(7), 4664-4683; https://doi.org/10.3390/e17074664 - 03 Jul 2015
Cited by 6 | Viewed by 4759
Abstract
The classic principal components analysis (PCA), kernel PCA (KPCA) and linear discriminant analysis (LDA) feature extraction methods evaluate the importance of components according to their covariance contribution, not considering the entropy contribution, which is important supplementary information for the covariance. To further improve the covariance-based methods such as PCA (or KPCA), this paper firstly proposed an entropy matrix to load the uncertainty information of random variables similar to the covariance matrix loading the variation information in PCA. Then an entropy-difference matrix was used as a weighting matrix for transforming the original training images. This entropy-difference weighting (EW) matrix not only made good use of the local information of the training samples, contrast to the global method of PCA, but also considered the category information similar to LDA idea. Then the EW method was integrated with PCA (or KPCA), to form new feature extracting method. The new method was used for face recognition with the nearest neighbor classifier. The experimental results based on the ORL and Yale databases showed that the proposed method with proper threshold parameters reached higher recognition rates than the usual PCA (or KPCA) methods. Full article

Article
Continuous Variable Quantum Key Distribution with a Noisy Laser
by Christian S. Jacobsen, Tobias Gehring and Ulrik L. Andersen
Entropy 2015, 17(7), 4654-4663; https://doi.org/10.3390/e17074654 - 03 Jul 2015
Cited by 16 | Viewed by 5257 | Correction
Abstract
Existing experimental implementations of continuous-variable quantum key distribution require shot-noise limited operation, achieved with shot-noise limited lasers. However, loosening this requirement on the laser source would allow for cheaper, potentially integrated systems. Here, we implement a theoretically proposed prepare-and-measure continuous-variable protocol and experimentally demonstrate the robustness of it against preparation noise stemming for instance from technical laser noise. Provided that direct reconciliation techniques are used in the post-processing we show that for small distances large amounts of preparation noise can be tolerated in contrast to reverse reconciliation where the key rate quickly drops to zero. Our experiment thereby demonstrates that quantum key distribution with non-shot-noise limited laser diodes might be feasible. Full article
(This article belongs to the Special Issue Quantum Cryptography)

Article
Quantifying Redundant Information in Predicting a Target Random Variable
by Virgil Griffith and Tracey Ho
Entropy 2015, 17(7), 4644-4653; https://doi.org/10.3390/e17074644 - 02 Jul 2015
Cited by 19 | Viewed by 5415
Abstract
We consider the problem of defining a measure of redundant information that quantifies how much common information two or more random variables specify about a target random variable. We discuss desired properties of such a measure, and propose new measures with some desirable properties. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)

Article
Differentiating Interictal and Ictal States in Childhood Absence Epilepsy through Permutation Rényi Entropy
by Nadia Mammone, Jonas Duun-Henriksen, Troels W. Kjaer and Francesco C. Morabito
Entropy 2015, 17(7), 4627-4643; https://doi.org/10.3390/e17074627 - 02 Jul 2015
Cited by 42 | Viewed by 6959
Abstract
Permutation entropy (PE) has been widely exploited to measure the complexity of the electroencephalogram (EEG), especially when complexity is linked to diagnostic information embedded in the EEG. Recently, the authors proposed a spatial-temporal analysis of the EEG recordings of absence epilepsy patients based on PE. The goal here is to improve the ability of PE in discriminating interictal states from ictal states in absence seizure EEG. For this purpose, a parametrical definition of permutation entropy is introduced here in the field of epileptic EEG analysis: the permutation Rényi entropy (PEr). PEr has been extensively tested against PE by tuning the involved parameters (order, delay time and alpha). The achieved results demonstrate that PEr outperforms PE, as there is a statistically-significant, wider gap between the PEr levels during the interictal states and PEr levels observed in the ictal states compared to PE. PEr also outperformed PE as the input to a classifier aimed at discriminating interictal from ictal states. Full article
(This article belongs to the Special Issue Entropy in Human Brain Networks)
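A minimal sketch of the quantity involved, assuming the usual ordinal-pattern recipe (order m, delay τ) and the Rényi generalization described in the abstract; the parameter values below are arbitrary, not the tuned settings from the study.

```python
import numpy as np
from itertools import permutations

def permutation_renyi_entropy(x, order=3, delay=1, alpha=2.0):
    """Permutation Rényi entropy of a 1-D signal from ordinal-pattern frequencies."""
    x = np.asarray(x, dtype=float)
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - (order - 1) * delay):
        window = x[i:i + order * delay:delay]
        counts[tuple(np.argsort(window))] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float)
    p /= p.sum()
    if np.isclose(alpha, 1.0):                      # alpha -> 1 recovers ordinary PE
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(1)
print(permutation_renyi_entropy(rng.standard_normal(5000)))  # near-maximal for white noise
print(permutation_renyi_entropy(np.arange(5000.0)))          # zero for a monotone ramp
```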

Article
A New Robust Regression Method Based on Minimization of Geodesic Distances on a Probabilistic Manifold: Application to Power Laws
by Geert Verdoolaege
Entropy 2015, 17(7), 4602-4626; https://doi.org/10.3390/e17074602 - 01 Jul 2015
Cited by 10 | Viewed by 5337
Abstract
In regression analysis for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. In many situations, the assumptions underlying OLS are not fulfilled, and several other approaches have been proposed. However, most techniques address only part of the shortcomings of OLS. We here discuss a new and more general regression method, which we call geodesic least squares regression (GLS). The method is based on minimization of the Rao geodesic distance on a probabilistic manifold. For the case of a power law, we demonstrate the robustness of the method on synthetic data in the presence of significant uncertainty on both the data and the regression model. We then show good performance of the method in an application to a scaling law in magnetic confinement fusion. Full article
(This article belongs to the Special Issue Information, Entropy and Their Geometric Structures)
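Schematically, geodesic least squares replaces the Euclidean residuals of ordinary least squares by Rao geodesic distances between a modeled and an observed distribution attached to each data point (a sketch of the idea; the precise construction of the observed distribution is part of the method's design):

  \hat{\theta}_{GLS} = \arg\min_{\theta} \sum_i d_R\!\left( p_{model}(y \mid x_i, \theta),\; p_{obs}(y \mid y_i) \right).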

Article
Applications of the Fuzzy Sumudu Transform for the Solution of First Order Fuzzy Differential Equations
by Norazrizal Aswad Abdul Rahman and Muhammad Zaini Ahmad
Entropy 2015, 17(7), 4582-4601; https://doi.org/10.3390/e17074582 - 01 Jul 2015
Cited by 18 | Viewed by 5285
Abstract
In this paper, we study the classical Sumudu transform in fuzzy environment, referred to as the fuzzy Sumudu transform (FST). We also propose some results on the properties of the FST, such as linearity, preserving, fuzzy derivative, shifting and convolution theorem. In order to show the capability of the FST, we provide a detailed procedure to solve fuzzy differential equations (FDEs). A numerical example is provided to illustrate the usage of the FST. Full article
(This article belongs to the Special Issue Dynamical Equations and Causal Structures from Observations)
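For reference, the classical Sumudu transform that is lifted to the fuzzy setting here is usually defined as

  S[f](u) = \int_0^{\infty} f(ut)\, e^{-t}\, dt

for values of u at which the integral converges; the fuzzy Sumudu transform extends this to fuzzy-number-valued functions.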
