Maximum Entropy and Its Application II

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (31 March 2017) | Viewed by 62088

Special Issue Editor


Dr. Dawn E. Holmes
Guest Editor
Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106-3110, USA
Interests: Bayesian networks; machine learning; data mining; knowledge discovery; the foundations of Bayesianism

Special Issue Information

Dear Colleagues,

The field of entropy-related research has been particularly fruitful in the past few decades, and it continues to produce important results in a range of scientific areas, including thermal engineering, quantum communications, and wildlife research. Contributions to this Special Issue are welcome from both theoretical and applied perspectives on entropy, including papers addressing conceptual and methodological developments as well as new applications of entropy and information theory. Foundational issues involving probability theory, information theory, inference, and inquiry are also of keen interest, as many open questions remain.

Dr. Dawn E. Holmes
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (12 papers)

Research

22 pages, 3431 KiB  
Article
Global Reliability Sensitivity Analysis Based on Maximum Entropy and 2-Layer Polynomial Chaos Expansion
by Jianyu Zhao, Shengkui Zeng, Jianbin Guo and Shaohua Du
Entropy 2018, 20(3), 202; https://doi.org/10.3390/e20030202 - 16 Mar 2018
Cited by 6 | Viewed by 3572
Abstract
Global reliability sensitivity analysis (GRSA) provides an appropriate tool for quantifying the contributions of uncertain input variables to a statistical parameter of a given model, e.g., its reliability. However, global reliability sensitivity indices can be more difficult to calculate than the traditional global sensitivity indices of the model output, because the underlying statistical parameters are harder to obtain; Monte Carlo simulation (MCS)-related methods have been essentially the only option for GRSA, and they are usually computationally demanding. This paper presents a new non-MCS calculation to evaluate global reliability sensitivity indices. The method proposes: (i) a 2-layer polynomial chaos expansion (PCE) framework to solve for the global reliability sensitivity indices; and (ii) an efficient way to build a surrogate model of the statistical parameter using the maximum entropy (ME) method with the moments provided by the PCE. This method has a dramatically reduced computational cost compared with traditional approaches. Two examples demonstrate the efficiency and accuracy of the proposed method. The results also suggest that the importance ranking of the model output and that of the associated failure probability may differ, which could improve understanding of the given model in further optimization design.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
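
A minimal sketch of the ME reconstruction step described above, assuming the moments are already available (in the paper they come from the 2-layer PCE surrogate; the helper below is ours, not the authors' code):

```python
import numpy as np
from scipy.optimize import minimize

def maxent_density(moments, a=-1.0, b=1.0, n=2001):
    """Maximum entropy density on [a, b] matching E[x^k] = moments[k-1]."""
    x = np.linspace(a, b, n)
    dx = x[1] - x[0]
    powers = np.vstack([x**k for k in range(1, len(moments) + 1)])

    def dual(lam):
        # Convex dual of the ME problem: log-partition plus <lam, moments>
        logq = -lam @ powers
        shift = logq.max()                       # numerical stabilization
        logZ = np.log(np.sum(np.exp(logq - shift)) * dx) + shift
        return logZ + lam @ np.asarray(moments)

    lam = minimize(dual, np.zeros(len(moments)), method="Nelder-Mead").x
    q = np.exp(-lam @ powers)
    return x, q / (q.sum() * dx)

# Match a zero mean and a second moment of 0.2 on [-1, 1]:
x, p = maxent_density([0.0, 0.2])
dx = x[1] - x[0]
print(np.sum(x * p) * dx, np.sum(x**2 * p) * dx)   # ~0.0, ~0.2
```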

2740 KiB  
Article
An Enhanced Set-Membership PNLMS Algorithm with a Correntropy Induced Metric Constraint for Acoustic Channel Estimation
by Zhan Jin, Yingsong Li and Yanyan Wang
Entropy 2017, 19(6), 281; https://doi.org/10.3390/e19060281 - 15 Jun 2017
Cited by 15 | Viewed by 4042
Abstract
In this paper, a sparse set-membership proportionate normalized least mean square (SM-PNLMS) algorithm integrated with a correntropy induced metric (CIM) penalty is proposed for acoustic channel estimation and echo cancellation. The CIM is used to construct a new cost function within the kernel framework. The proposed CIM-penalized SM-PNLMS (CIMSM-PNLMS) algorithm is derived and analyzed in detail. A desired zero-attraction term is added to the updating equation of the proposed CIMSM-PNLMS algorithm to force the inactive coefficients to zero. The performance of the proposed CIMSM-PNLMS algorithm is investigated for estimating an underwater communication channel and an echo channel. The obtained results demonstrate that the proposed CIMSM-PNLMS algorithm converges faster and provides a smaller estimation error in comparison with the NLMS, PNLMS, IPNLMS, SM-PNLMS and zero-attracting SM-PNLMS (ZASM-PNLMS) algorithms.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
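
For illustration, the sketch below shows where such a zero-attraction term comes from, under the standard Gaussian-kernel definition of the CIM. The kernel width sigma and strength rho are illustrative values, and the set-membership and proportionate-gain parts of the actual CIMSM-PNLMS update are omitted:

```python
import numpy as np

def gauss_peak(sigma):
    """Peak value of the Gaussian kernel used by the CIM."""
    return 1.0 / (sigma * np.sqrt(2.0 * np.pi))

def cim_squared(w, sigma=0.05):
    """CIM^2 between the weight vector w and the zero vector."""
    return gauss_peak(sigma) * np.mean(1.0 - np.exp(-w**2 / (2.0 * sigma**2)))

def cim_zero_attractor(w, sigma=0.05):
    """Gradient of CIM^2: small (inactive) taps are pulled hardest to zero."""
    g = gauss_peak(sigma)
    return g * w * np.exp(-w**2 / (2.0 * sigma**2)) / (len(w) * sigma**2)

def nlms_cim_step(w, x, d, mu=0.5, rho=1e-4, sigma=0.05, eps=1e-8):
    """One NLMS-style update with the CIM zero-attraction term added."""
    e = d - w @ x
    return w + mu * e * x / (x @ x + eps) - rho * cim_zero_attractor(w, sigma)
```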

2722 KiB  
Article
Joint Characteristic Timescales and Entropy Production Analyses for Model Reduction of Combustion Systems
by Sylvia Porras, Viatcheslav Bykov, Vladimir Gol’dshtein and Ulrich Maas
Entropy 2017, 19(6), 264; https://doi.org/10.3390/e19060264 - 09 Jun 2017
Cited by 9 | Viewed by 3990
Abstract
The reduction of chemical kinetics describing combustion processes remains one of the major topics in combustion theory and its applications. Problems concerning the estimation of a reaction mechanism's real dimension remain unsolved, and this is a critical point in the development of reduction models. In this study, we suggest a combination of local timescale and entropy production analyses to cope with this problem. In particular, the study focuses on the skeletal-mechanism framework as a practical and straightforward implementation strategy for reduced mechanisms. Hydrogen and methane/dimethyl ether reaction mechanisms are considered for illustration and validation purposes. Two skeletal mechanism versions were obtained for the methane/dimethyl ether combustion system by varying the tolerance used to identify important reactions in the characteristic timescale analysis of the system. Comparisons of ignition delay times and species profiles calculated with the detailed and the reduced models are presented. The results clearly show the potential of the suggested approach to be implemented automatically for the reduction of large chemical kinetic models.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
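
The timescale half of the analysis can be pictured with a toy example: local characteristic timescales are the reciprocals of the magnitudes of the real parts of the eigenvalues of the chemical source-term Jacobian, and reactions tied only to scales faster than a chosen tolerance become candidates for removal. The two-species Jacobian below is a made-up illustration, not a mechanism from the paper:

```python
import numpy as np

def characteristic_timescales(jacobian):
    """Local timescales = 1 / |Re(eigenvalues)| of the source-term Jacobian."""
    re = np.abs(np.linalg.eigvals(jacobian).real)
    return np.sort(1.0 / re[re > 0.0])          # fastest mode first

# Toy linear kinetics dy/dt = J y with one fast and one slow mode:
J = np.array([[-1000.0,  1.0],
              [    0.0, -1.0]])
print(characteristic_timescales(J))             # -> [0.001, 1.0]
```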

277 KiB  
Article
Improving the Naive Bayes Classifier via a Quick Variable Selection Method Using Maximum of Entropy
by Joaquín Abellán and Javier G. Castellano
Entropy 2017, 19(6), 247; https://doi.org/10.3390/e19060247 - 25 May 2017
Cited by 37 | Viewed by 7874
Abstract
Variable selection methods play an important role in the field of attribute mining. The Naive Bayes (NB) classifier is a very simple and popular classification method that yields good results in a short processing time. Hence, it is a very appropriate classifier for very large datasets. The method has a high dependence on the relationships between the variables. The Info-Gain (IG) measure, which is based on general entropy, can be used as a quick variable selection method. This measure ranks the importance of the attribute variables with respect to a variable under study, using the information obtained from a dataset. Its main drawback is that it is always non-negative, so it requires setting an information threshold to select the set of most important variables for each dataset. We introduce here a new quick variable selection method that generalizes the one based on the Info-Gain measure. It uses imprecise probabilities and the maximum entropy measure to select the most informative variables without setting a threshold. This new variable selection method, combined with the Naive Bayes classifier, improves the original method and provides a valuable tool for handling datasets with a very large number of features and a huge amount of data, where more complex methods are not computationally feasible.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
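
For reference, the baseline that the paper generalizes, Info-Gain ranking for discrete attributes, can be sketched as follows; the imprecise-probability and maximum entropy extension that removes the threshold is the paper's contribution and is not reproduced here:

```python
import numpy as np
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * np.log2(c / n) for c in Counter(labels).values())

def info_gain(x, y):
    """IG(Y; X) = H(Y) - sum_v P(X = v) * H(Y | X = v) for discrete data."""
    n = len(y)
    cond = 0.0
    for v in set(x):
        sub = [y[i] for i in range(n) if x[i] == v]
        cond += len(sub) / n * entropy(sub)
    return entropy(y) - cond

y  = ['a', 'a', 'b', 'b']
x1 = [0, 0, 1, 1]                 # perfectly informative attribute
x2 = [0, 1, 0, 1]                 # uninformative attribute
print(info_gain(x1, y), info_gain(x2, y))   # -> 1.0 bits, 0.0 bits
```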

906 KiB  
Article
Measures of Qualitative Variation in the Case of Maximum Entropy
by Atif Evren and Erhan Ustaoğlu
Entropy 2017, 19(5), 204; https://doi.org/10.3390/e19050204 - 04 May 2017
Cited by 5 | Viewed by 4546
Abstract
The asymptotic behavior of qualitative variation statistics, including entropy measures, can be modeled well by normal distributions. In this study, we test the normality of various qualitative variation measures in general. We find that almost all indices tend to normality as the sample size increases, and they are highly correlated. However, for all of these qualitative variation statistics, maximum uncertainty is a serious factor that prevents normality. Among these, we study the properties of two qualitative variation statistics, VarNC and StDev, in the case of maximum uncertainty, since these two statistics show lower sampling variability and utilize all of the sample information. We derive the probability distribution functions of these statistics and prove that they are consistent. We also discuss the relationship between VarNC and the normalized form of Tsallis (α = 2) entropy in the case of maximum uncertainty.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
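
Under the usual definitions of these indices (an assumption on our part; the paper's normalizations may differ in detail), VarNC and the normalized Tsallis (α = 2) entropy are the same function of the class proportions, which a few lines of code can check numerically:

```python
import numpy as np

def varnc(counts):
    """VarNC: 1 for a uniform split, 0 when one category holds everything."""
    f = np.asarray(counts, dtype=float)
    n, k = f.sum(), len(f)
    return 1.0 - np.sum((f - n / k) ** 2) / (n**2 * (k - 1) / k)

def tsallis2_normalized(counts):
    """Tsallis entropy with alpha = 2, rescaled to the interval [0, 1]."""
    p = np.asarray(counts, dtype=float) / np.sum(counts)
    k = len(p)
    return (1.0 - np.sum(p**2)) * k / (k - 1)

counts = [30, 50, 20]
print(varnc(counts), tsallis2_normalized(counts))   # identical values
```

Both expressions reduce algebraically to k(1 - sum p_i^2)/(k - 1), which is why the printed values agree for any counts.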

1300 KiB  
Article
Entropy “2”-Soft Classification of Objects
by Yuri S. Popkov, Zeev Volkovich, Yuri A. Dubnov, Renata Avros and Elena Ravve
Entropy 2017, 19(4), 178; https://doi.org/10.3390/e19040178 - 20 Apr 2017
Cited by 4 | Viewed by 3989
Abstract
We propose a new method for the classification of objects of various natures, named “2”-soft classification, which assigns objects to one of two types with entropy-optimal probability for an available collection of learning data, taking the additive errors therein into consideration. A decision rule with randomized parameters is formed, and its probability density function (PDF) is determined by solving a functional entropy-linear programming problem. A procedure for “2”-soft classification is developed, consisting of computer simulation of the randomized decision rule with the entropy-optimal PDF of its parameters. Examples are provided.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
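
The final simulation step can be pictured with a deliberately naive stand-in: once the entropy-optimal PDF of the rule parameters is known, an object's soft class membership is the Monte Carlo frequency with which sampled rules assign it to type 1. The Gaussian parameter PDF and the linear rule below are placeholders, not the paper's entropy-optimal solution:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_classify(x, theta_samples):
    """P(type 1 | x): fraction of sampled randomized rules voting type 1."""
    return float(np.mean(theta_samples @ x > 0.0))

# Placeholder parameter PDF (the paper derives the entropy-optimal one):
theta = rng.normal(loc=[1.0, -0.5], scale=0.3, size=(10_000, 2))
print(soft_classify(np.array([0.2, 0.1]), theta))   # soft membership in (0, 1)
```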

993 KiB  
Article
Is Turbulence a State of Maximum Energy Dissipation?
by Martin Mihelich, Davide Faranda, Didier Paillard and Bérengère Dubrulle
Entropy 2017, 19(4), 154; https://doi.org/10.3390/e19040154 - 31 Mar 2017
Cited by 9 | Viewed by 6542
Abstract
Turbulent flows are known to enhance turbulent transport. It has even been suggested that turbulence is a state of maximum energy dissipation. In this paper, we critically re-examine this suggestion in light of several recent works on the Maximum Entropy Production (MEP) principle, which has been applied to several out-of-equilibrium systems. We consider a set of four different optimization principles, based on the maximization of energy dissipation, entropy production, and Kolmogorov–Sinai entropy, and on the minimization of mixing time, and we study the connections between these principles using simple out-of-equilibrium models describing the mixing of a scalar quantity. We find a chained relationship between the most probable stationary states of the system and their ability to obey one of the four principles. This provides an empirical justification of the Maximum Entropy Production principle in this class of systems, including some turbulent flows, for special boundary conditions. Otherwise, we argue that minimization of the mixing time would be a more appropriate principle. We stress that this principle might be limited to flows where symmetry or dynamics impose pure mixing of a quantity (such as angular momentum, momentum or temperature). The claim that turbulence is a state of maximum energy dissipation, a quantity intimately related to entropy production, is therefore limited to special situations that nevertheless include classical systems such as shear flows, Rayleigh–Bénard convection and von Kármán flows, forced with constant velocity or temperature conditions.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
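
One of the four quantities compared, the Kolmogorov–Sinai entropy, has a concrete form for the finite-state Markov models used in such toy mixing problems: h_KS = -sum_i pi_i sum_j P_ij ln P_ij, with pi the stationary distribution. The sketch and transition matrix below are illustrative and not taken from the paper:

```python
import numpy as np

def ks_entropy(P):
    """Kolmogorov-Sinai entropy rate of a Markov chain with transitions P."""
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()                           # stationary distribution
    logP = np.log(np.where(P > 0.0, P, 1.0))     # convention: 0 * log 0 = 0
    return -np.sum(pi[:, None] * P * logP)

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(ks_entropy(P))                             # entropy rate in nats/step
```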

1478 KiB  
Article
Impact Location and Quantification on an Aluminum Sandwich Panel Using Principal Component Analysis and Linear Approximation with Maximum Entropy
by Viviana Meruane, Pablo Véliz, Enrique López Droguett and Alejandro Ortiz-Bernardin
Entropy 2017, 19(4), 137; https://doi.org/10.3390/e19040137 - 25 Mar 2017
Cited by 5 | Viewed by 4159
Abstract
To avoid structural failures, it is critically important to detect, locate, and quantify impact damage as soon as it occurs. This can be achieved with impact identification methodologies that continuously monitor the structure. This article presents an improved impact identification algorithm that uses principal component analysis (PCA) to extract features from the monitored signals and an algorithm based on linear approximation with maximum entropy to estimate the impacts. The proposed methodology is validated with two experimental applications: an aluminum plate and an aluminum sandwich panel. The results are compared with those of other impact identification algorithms available in the literature, demonstrating that the proposed method outperforms them.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
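
The "linear approximation with maximum entropy" ingredient refers to max-ent approximants: non-negative interpolation weights chosen as the most entropic ones that still reproduce the evaluation point exactly. A hedged one-dimensional sketch, with the locality parameter beta and the node layout being our choices:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def maxent_weights(nodes, x, beta=4.0):
    """Entropy-optimal weights w >= 0 with sum w = 1 and sum w * nodes = x."""
    d = nodes - x
    logZ = lambda lam: np.log(np.sum(np.exp(-beta * d**2 + lam * d)))
    lam = minimize_scalar(logZ).x               # dual of the entropy problem
    w = np.exp(-beta * d**2 + lam * d)
    return w / w.sum()

nodes = np.linspace(0.0, 1.0, 6)
w = maxent_weights(nodes, 0.37)
print(w @ nodes)                 # -> 0.37: first-order consistency holds
```

In the identification setting, such weights would blend known training impacts in the PCA feature space; that reading of the abstract is ours.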

3660 KiB  
Article
Physical Intelligence and Thermodynamic Computing
by Robert L. Fry
Entropy 2017, 19(3), 107; https://doi.org/10.3390/e19030107 - 09 Mar 2017
Cited by 13 | Viewed by 8090
Abstract
This paper proposes that intelligent processes can be completely explained by thermodynamic principles. They can equally be described by information-theoretic principles that, from the standpoint of the required optimizations, are functionally equivalent. The underlying theory arises from two axioms regarding distinguishability and causality. Their consequence is a theory of computation that applies to the only two kinds of physical processes possible: those that reconstruct the past and those that control the future. Dissipative physical processes fall into the first class, whereas intelligent ones comprise the second. The first kind of process is exothermic and the second is endothermic. Similarly, the first process dumps entropy and energy into its environment, whereas the second reduces entropy while requiring energy to operate. It is shown that high intelligence efficiency and high energy efficiency are synonymous. The theory suggests the usefulness of developing a new computing paradigm called Thermodynamic Computing to engineer intelligent processes. The described engineering formalism for the design of thermodynamic computers is a hybrid combination of information theory and thermodynamics. Elements of the engineering formalism are introduced through the reverse-engineering of a cortical neuron, which provides perhaps the simplest and most insightful example of a thermodynamic computer and can be seen as a basic building block for constructing more intelligent thermodynamic circuits.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)

9163 KiB  
Article
Entropy-Based Method for Evaluating Contact Strain-Energy Distribution for Assembly Accuracy Prediction
by Yan Fang, Xin Jin, Chencan Huang and Zhijing Zhang
Entropy 2017, 19(2), 49; https://doi.org/10.3390/e19020049 - 24 Jan 2017
Cited by 15 | Viewed by 4569
Abstract
Assembly accuracy significantly affects the performance of precision mechanical systems. In this study, an entropy-based evaluation method for the contact strain-energy distribution is proposed to predict assembly accuracy. Strain energy is used to characterize the effects of the combination of form errors and contact deformations on the formation of assembly errors. To obtain the strain energy, the contact state is analyzed by applying the finite element method (FEM) to 3D solid models of real parts containing form errors. Entropy is employed to evaluate the uniformity of the contact strain-energy distribution. An evaluation model, in which the uniformity of the contact strain-energy distribution is evaluated at three levels based on entropy, is developed to predict the assembly accuracy, and a comprehensive index is proposed. Assembly experiments are conducted for five sets of two rotating parts. Moreover, the coaxiality between the surfaces of the two parts with assembly accuracy requirements is selected as the verification index to verify the effectiveness of the evaluation method. The results are in good agreement with the verification index, indicating that the method presented in this study is reliable and effective in predicting assembly accuracy.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
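
The entropy-as-uniformity idea at the core of the method can be sketched in a few lines: normalize the per-element contact strain energies into shares and compute a normalized Shannon entropy, where 1 means perfectly uniform contact. The paper's three-level evaluation model and comprehensive index are not reproduced here:

```python
import numpy as np

def strain_energy_uniformity(energies):
    """Normalized Shannon entropy of contact strain-energy shares (0..1)."""
    p = np.asarray(energies, dtype=float)
    p = p[p > 0.0]
    p = p / p.sum()
    return -np.sum(p * np.log(p)) / np.log(len(p))

print(strain_energy_uniformity([1.0, 1.1, 0.9, 1.0]))   # ~1.0: even contact
print(strain_energy_uniformity([10.0, 0.1, 0.1, 0.1]))  # low: concentrated
```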

1562 KiB  
Article
Angular Spectral Density and Information Entropy for Eddy Current Distribution
by Guolong Chen and Weimin Zhang
Entropy 2016, 18(11), 392; https://doi.org/10.3390/e18110392 - 10 Nov 2016
Cited by 8 | Viewed by 4434
Abstract
Here, a new method is proposed to quantitatively evaluate the eddy current distribution induced by different exciting coils of an eddy current probe. The probability of the energy allocation of a vector field is modeled via conservation of energy, imitating the wave function in quantum mechanics. The idea of quantization and the principle of circuit sampling are used to discretize the space of the vector field. A method to calculate the angular spectral density and the Shannon information entropy is then proposed. The eddy currents induced by three different exciting coils are evaluated with this method, and the specific nature of eddy current testing is discussed.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
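
A hedged sketch of the angular idea: bin the eddy current vectors by direction, weight each bin by its share of the total energy, and take the Shannon entropy of the resulting angular distribution. The bin count and the synthetic isotropic field are assumptions for illustration:

```python
import numpy as np

def angular_entropy(jx, jy, n_bins=36):
    """Shannon entropy (bits) of the energy-weighted angular distribution."""
    energy = jx**2 + jy**2
    angles = np.arctan2(jy, jx)
    hist, _ = np.histogram(angles, bins=n_bins,
                           range=(-np.pi, np.pi), weights=energy)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))              # higher = more isotropic

rng = np.random.default_rng(1)
jx, jy = rng.normal(size=(2, 10_000))           # isotropic synthetic field
print(angular_entropy(jx, jy))                  # close to log2(36) ~ 5.17
```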

3004 KiB  
Article
A Robust Sparse Adaptive Filtering Algorithm with a Correntropy Induced Metric Constraint for Broadband Multi-Path Channel Estimation
by Yingsong Li, Zhan Jin, Yanyan Wang and Rui Yang
Entropy 2016, 18(10), 380; https://doi.org/10.3390/e18100380 - 24 Oct 2016
Cited by 23 | Viewed by 5531
Abstract
A robust sparse least-mean mixture-norm (LMMN) algorithm is proposed, and its performance is appraised in the context of estimating a broadband multi-path wireless channel. The proposed algorithm is implemented by integrating a correntropy-induced metric (CIM) penalty into the conventional LMMN algorithm to modify the basic cost function; the result is denoted the CIM-based LMMN (CIM-LMMN) algorithm. The proposed CIM-LMMN algorithm is derived in detail within the kernel framework. The updating equation of CIM-LMMN provides a zero attractor that pulls the non-dominant channel coefficients toward zero, and it also gives a tradeoff between sparsity and estimation misalignment. Moreover, the channel estimation behavior is investigated over a broadband sparse multi-path wireless channel, and the simulation results are compared with those of the least mean square/fourth (LMS/F), least mean square (LMS), least mean fourth (LMF) and recently developed sparse channel estimation algorithms. The obtained performance demonstrates that the CIM-LMMN algorithm outperforms the recently developed sparse LMMN algorithms and the relevant sparse channel estimation algorithms. From the results, we can see that the CIM-LMMN algorithm is robust and superior to the mentioned algorithms in terms of both convergence speed and channel estimation misalignment when estimating a sparse channel.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
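
A minimal, self-contained sketch of one CIM-LMMN iteration as the abstract describes it: the least-mean mixture-norm gradient blends the LMS (e) and LMF (e^3) error terms through the mixing parameter delta, and a CIM-derived zero attractor supplies the sparsity. All parameter values are ours, not the paper's:

```python
import numpy as np

def cim_lmmn_step(w, x, d, mu=0.01, delta=0.6, rho=1e-4, sigma=0.05):
    """One CIM-LMMN update on weight vector w given input x and desired d."""
    e = d - w @ x
    grad = (delta * e + (1.0 - delta) * e**3) * x       # LMMN term
    k0 = 1.0 / (sigma * np.sqrt(2.0 * np.pi))           # Gaussian kernel peak
    attractor = k0 * w * np.exp(-w**2 / (2 * sigma**2)) / (len(w) * sigma**2)
    return w + mu * grad - rho * attractor

# Identify a sparse 16-tap channel from noisy observations:
rng = np.random.default_rng(2)
h = np.zeros(16)
h[[2, 9]] = [1.0, -0.5]
w = np.zeros(16)
for _ in range(5000):
    x = rng.normal(size=16)
    d = h @ x + 0.01 * rng.normal()
    w = cim_lmmn_step(w, x, d)
print(np.round(w, 2))            # non-zero taps recovered, others near zero
```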
