Entropy, Volume 26, Issue 12 (December 2024) – 123 articles

Cover Story: Kinetic theory refers to the physical and mathematical approaches to a systematic deduction of the macroscopic behavior of many-particle systems from first principles, that is, starting from the microscopic equations of motion. This work presents a kinetic theory based upon the Landau equation, describing the one-particle distribution function in a weak-interaction limit for self-propelled particles with alignment interactions. Self-propelled particles—driven units that break the conservation of momentum on a particle scale—are the epitome of active matter. This paper considers such particles interacting with nematic symmetry. The relevant equations can be brought into a diagrammatic form (the first three orders are shown), allowing accurate quantitative predictions to be extracted for agent-based simulations.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, and PDF is the official format. To view a paper in PDF form, click on the "PDF Full-text" link and open it with the free Adobe Reader.
12 pages, 268 KiB  
Article
Problem of Existence of Joint Distribution on Quantum Logic
by Oľga Nánásiová, Karla Čipková and Michal Zákopčan
Entropy 2024, 26(12), 1121; https://doi.org/10.3390/e26121121 - 21 Dec 2024
Abstract
This paper deals with modeling joint distributions on a generalized probability space. An algebraic structure known as quantum logic is taken as the basic model. A brief summary is given of earlier published findings concerning the function called an s-map, a mathematical tool suitable for constructing virtual joint probabilities of even non-compatible propositions. The paper completes conclusions published in 2020 and extends the results to three or more random variables when the marginal distributions are known. The existence of an (n+1)-variate joint distribution is shown in special cases when the quantum logic consists of at most n blocks of Boolean algebras. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness V)
11 pages, 801 KiB  
Article
Computing Entropy for Long-Chain Alkanes Using Linear Regression: Application to Hydroisomerization
by Shrinjay Sharma, Richard Baur, Marcello Rigutto, Erik Zuidema, Umang Agarwal, Sofia Calero, David Dubbeldam and Thijs J. H. Vlugt
Entropy 2024, 26(12), 1120; https://doi.org/10.3390/e26121120 - 21 Dec 2024
Abstract
Entropies for alkane isomers longer than C10 are computed using our recently developed linear regression model for thermochemical properties, which is based on second-order group contributions. The computed entropies show excellent agreement with experimental data and with data from Scott’s tables, which are obtained from a statistical-mechanics-based correlation. Entropy production and heat input are calculated for the hydroisomerization of C7 isomers in various zeolites (FAU-, ITQ-29-, BEA-, MEL-, MFI-, MTW-, and MRE-types) at 500 K at chemical equilibrium. Small variations in these properties are observed because of differences in the reaction equilibrium distributions for these zeolites. The effect of chain length on heat input and entropy production is also studied for the hydroisomerization of C7, C8, C10, and C14 isomers in MTW-type zeolite at 500 K. For longer chains, both heat input and entropy production increase. Enthalpies and absolute entropies of C7 hydroisomerization reaction products in MTW-type zeolite increase with temperature. These findings highlight the accuracy of our linear regression model in computing entropies for alkanes and provide insight for designing and optimizing zeolite-catalyzed hydroisomerization processes. Full article
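A group-contribution model of this kind reduces to ordinary linear least squares once each isomer is described by a vector of group counts. The sketch below is a minimal illustration only: the descriptor columns, molecule set, and entropy values are hypothetical or approximate literature numbers, not the paper's second-order descriptor set or fitted coefficients.

```python
import numpy as np

# Hypothetical feature matrix: rows are alkanes, columns are group counts
# (CH3, CH2, CH, C, plus one illustrative second-order correction term).
X = np.array([
    [2, 3, 0, 0, 0],   # n-pentane
    [3, 1, 1, 0, 1],   # 2-methylbutane (isopentane)
    [4, 0, 0, 1, 2],   # 2,2-dimethylpropane (neopentane)
], dtype=float)

# Approximate ideal-gas standard entropies at 298 K in J/(mol K).
S_exp = np.array([349.1, 343.6, 306.4])

# Least-squares fit of the group contributions; with more unknowns than
# molecules this toy system is underdetermined, so lstsq returns the
# minimum-norm solution.
coef, *_ = np.linalg.lstsq(X, S_exp, rcond=None)
print("predicted:", X @ coef, "target:", S_exp)
```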
21 pages, 10953 KiB  
Review
Machine Learning Advances in High-Entropy Alloys: A Mini-Review
by Yibo Sun and Jun Ni
Entropy 2024, 26(12), 1119; https://doi.org/10.3390/e26121119 - 20 Dec 2024
Abstract
The efficacy of machine learning has increased exponentially over the past decade. The utilization of machine learning to predict and design materials has become a pivotal tool for accelerating materials development. High-entropy alloys are particularly intriguing candidates for exemplifying the potency of machine learning due to their superior mechanical properties, vast compositional space, and intricate chemical interactions. This review examines the general process of developing machine learning models. The advances and new algorithms of machine learning in the field of high-entropy alloys are presented at each step of that process. These advances rest both on improvements in computer algorithms and on physical representations that capture the unique ordering properties of high-entropy alloys. We also show the results of generative models, data augmentation, and transfer learning in high-entropy alloys and conclude with a summary of the challenges still faced in applying machine learning to high-entropy alloys today. Full article
16 pages, 1620 KiB  
Article
EXIT Charts for Low-Density Algebra-Check Codes
by Zuo Tang, Jing Lei and Ying Huang
Entropy 2024, 26(12), 1118; https://doi.org/10.3390/e26121118 - 20 Dec 2024
Abstract
This paper focuses on the Low-Density Algebra-Check (LDAC) code, a novel low-rate channel code derived from the Low-Density Parity-Check (LDPC) code with expanded algebra-check constraints. A method for optimizing LDAC code design using Extrinsic Information Transfer (EXIT) charts is presented. Firstly, an iterative decoding model for LDAC is established according to its structure, and a method for plotting EXIT curves of the algebra-check node decoder is proposed. Then, the performance of two types of algebra-check nodes under different conditions is analyzed via EXIT curves. Finally, a low-rate LDAC code with enhanced coding gain is constructed, demonstrating the effectiveness of the proposed method. Full article
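For readers unfamiliar with EXIT analysis: each decoder component is characterized by a transfer curve mapping a priori mutual information I_A to extrinsic mutual information I_E, usually computed under a Gaussian LLR assumption via the J-function. Below is a minimal sketch of the standard variable-node curve for an LDPC-style code over a BIAWGN channel; the algebra-check node curves that are this paper's contribution are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def J(sigma, n=4001):
    """Mutual information between a BPSK bit and its LLR under the
    consistent-Gaussian model L ~ N(sigma^2/2, sigma^2)."""
    if sigma < 1e-6:
        return 0.0
    l = np.linspace(sigma**2 / 2 - 10 * sigma, sigma**2 / 2 + 10 * sigma, n)
    pdf = np.exp(-(l - sigma**2 / 2) ** 2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    return 1.0 - np.trapz(pdf * np.log2(1.0 + np.exp(-l)), l)

def J_inv(I, lo=1e-4, hi=60.0):
    """Invert J by bisection (J is monotone in sigma)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J(mid) < I else (lo, mid)
    return 0.5 * (lo + hi)

# Variable-node EXIT curve for degree d_v:
# I_E = J(sqrt((d_v - 1) * J_inv(I_A)^2 + sigma_ch^2)), sigma_ch^2 = 8 R Eb/N0.
d_v, R, EbN0_dB = 3, 0.5, 1.0
sigma_ch2 = 8 * R * 10 ** (EbN0_dB / 10)
for I_A in np.linspace(0.0, 0.99, 5):
    I_E = J(np.sqrt((d_v - 1) * J_inv(I_A) ** 2 + sigma_ch2))
    print(f"I_A={I_A:.2f} -> I_E={I_E:.3f}")
```

Plotting such a curve against the (inverted) check-node curve reveals whether a decoding tunnel exists at the chosen Eb/N0, which is exactly how the chart is used to optimize a code ensemble.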
21 pages, 5924 KiB  
Article
Parallel Bayesian Optimization of Thermophysical Properties of Low Thermal Conductivity Materials Using the Transient Plane Source Method in the Body-Fitted Coordinate
by Huijuan Su, Jianye Kang, Yan Li, Mingxin Lyu, Yanhua Lai and Zhen Dong
Entropy 2024, 26(12), 1117; https://doi.org/10.3390/e26121117 - 20 Dec 2024
Abstract
A heat transfer model of the transient plane source (TPS) method was established. A body-fitted coordinate system is proposed to transform the unstructured grid in order to improve the speed of solving the direct heat transfer problem of the winding probe. A parallel Bayesian optimization algorithm based on a multi-objective hybrid strategy (MHS) is proposed for the inverse problem, improving the efficiency of thermophysical property inversion. The results show that the 30° meshing scheme performs best. The body-fitted mesh transformation is related to the orthogonality and density of the mesh. Compared with parameter inversion using computational fluid dynamics (CFD) software, the absolute values of the relative deviations for different materials are less than 0.03%. The calculation speed of the body-fitted grid program is more than 36% and 91% higher than that of the CFD and self-developed unstructured-mesh programs, respectively. The application of a body-fitted coordinate system thus effectively improves the calculation speed of the TPS method. The MHS is more competitive than other algorithms in parallel mode, in terms of both accuracy and speed. The accuracy of the inversion is only weakly affected by the number of initial samples, the time range, and the number of parallel points. Increasing the number of parallel points from 2 to 6 reduces the computation time by 66.6%; adding parallel points effectively accelerates the convergence of the algorithm. Full article
(This article belongs to the Section Thermodynamics)
17 pages, 1972 KiB  
Article
A DNA Data Storage Method Using Spatial Encoding Based Lossless Compression
by Esra Şatır
Entropy 2024, 26(12), 1116; https://doi.org/10.3390/e26121116 - 20 Dec 2024
Abstract
With the rapid increase in global data and the rapid development of information technology, DNA sequences have been collected and manipulated on computers. This has yielded a new and attractive field of bioinformatics, DNA storage, in which DNA is considered a storage medium of great potential. It is known that one gram of DNA can store 215 PB of data, and data stored in DNA can be preserved for tens of thousands of years. In this study, a lossless and reversible DNA data storage method is proposed. The proposed approach employs a vector representation of each DNA base in a two-dimensional (2D) spatial domain for both encoding and decoding. The structure of the proposed method is reversible, rendering the decompression procedure possible. Experiments were performed to investigate capacity, compression ratio, stability, and reliability. The obtained results show that the proposed method is much more efficient in terms of capacity than other known algorithms in the literature. Full article
(This article belongs to the Special Issue Coding and Algorithms for DNA-Based Data Storage Systems)
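The core idea of a spatial encoding can be illustrated with a toy, fully reversible base-to-vector mapping. Everything below (the particular vectors and function names) is a hypothetical sketch for illustration; the paper's actual 2D encoding and its compression pipeline are more elaborate.

```python
# A minimal, reversible base <-> 2D-vector mapping (illustrative only;
# not the encoding proposed in the paper).
BASE2VEC = {"A": (0, 1), "T": (0, -1), "G": (1, 0), "C": (-1, 0)}
VEC2BASE = {v: k for k, v in BASE2VEC.items()}

def encode(seq: str) -> list[tuple[int, int]]:
    return [BASE2VEC[b] for b in seq]

def decode(vecs: list[tuple[int, int]]) -> str:
    return "".join(VEC2BASE[tuple(v)] for v in vecs)

# Losslessness/reversibility check, the property the paper emphasizes.
assert decode(encode("ACGTTGCA")) == "ACGTTGCA"
```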
23 pages, 564 KiB  
Article
Lossless Image Compression Using Context-Dependent Linear Prediction Based on Mean Absolute Error Minimization
by Grzegorz Ulacha and Mirosław Łazoryszczak
Entropy 2024, 26(12), 1115; https://doi.org/10.3390/e26121115 - 20 Dec 2024
Abstract
This paper presents a method for lossless compression of images with fast decoding time and the option to select encoder parameters to match individual image characteristics, increasing compression efficiency. The data modeling stage is based on linear and nonlinear prediction, complemented by a simple block for removing the context-dependent constant component. The prediction is based on the Iteratively Reweighted Least Squares (IRLS) method, which allows the mean absolute error to be minimized. Two-stage compression is used to encode prediction errors: an adaptive Golomb code followed by binary arithmetic coding. High compression efficiency is achieved by using the authors’ context-switching algorithm, which allows several prediction models to be tailored to the individual characteristics of each image area. In addition, an analysis of the impact of individual encoder parameters on efficiency and encoding time is conducted, and the efficiency of the proposed solution is shown against competing solutions, with a 9.1% improvement in the average bit rate over the entire test base compared to JPEG-LS. Full article
(This article belongs to the Section Signal and Data Analysis)
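Minimizing mean absolute error with IRLS amounts to repeatedly solving a weighted least-squares problem in which each sample's weight is the reciprocal of its current absolute residual. A minimal sketch under that assumption, using a generic design matrix rather than the paper's context-dependent pixel neighborhoods:

```python
import numpy as np

def irls_l1(X, y, iters=50, eps=1e-6):
    """Linear predictor coefficients minimizing mean absolute error via
    Iteratively Reweighted Least Squares (weights ~ 1 / |residual|)."""
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        sw = np.sqrt(w)
        # Weighted least squares: minimize sum_i w_i * r_i^2 = sum_i |r_i|.
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)  # eps guards near-zero residuals
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.laplace(scale=0.1, size=100)
print(irls_l1(X, y))
```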
36 pages, 2037 KiB  
Article
Contextual Fine-Tuning of Language Models with Classifier-Driven Content Moderation for Text Generation
by Matan Punnaivanam and Palani Velvizhy
Entropy 2024, 26(12), 1114; https://doi.org/10.3390/e26121114 - 20 Dec 2024
Abstract
In today’s digital age, ensuring the appropriateness of content for children is crucial for their cognitive and emotional development. The rise of automated text generation technologies, such as Large Language Models like LLaMA, Mistral, and Zephyr, has created a pressing need for effective tools to filter and classify suitable content. However, the existing methods often fail to effectively address the intricate details and unique characteristics of children’s literature. This study aims to bridge this gap by developing a robust framework that utilizes fine-tuned language models, classification techniques, and contextual story generation to generate and classify children’s stories based on their suitability. Employing a combination of fine-tuning techniques on models such as LLaMA, Mistral, and Zephyr, alongside a BERT-based classifier, we evaluated the generated stories against established metrics like ROUGE, METEOR, and BERT Scores. The fine-tuned Mistral-7B model achieved a ROUGE-1 score of 0.4785, significantly higher than the base model’s 0.3185, while Zephyr-7B-Beta achieved a METEOR score of 0.4154 compared to its base counterpart’s score of 0.3602. The results indicated that the fine-tuned models outperformed base models, generating content more aligned with human standards. Moreover, the BERT Classifier exhibited high precision (0.95) and recall (0.97) for identifying unsuitable content, further enhancing the reliability of content classification. These findings highlight the potential of advanced language models in generating age-appropriate stories and enhancing content moderation strategies. This research has broader implications for educational technology, content curation, and parental control systems, offering a scalable approach to ensuring children’s exposure to safe and enriching narratives. Full article
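As a concrete note on the evaluation side, ROUGE scores of the kind reported here can be computed with Google's open-source rouge-score package. A minimal usage sketch with made-up sentences; the paper's datasets and exact metric settings are not reproduced.

```python
# pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
reference = "the fox jumped over the lazy dog"
generated = "a fox quickly jumped over a sleeping dog"
scores = scorer.score(reference, generated)  # dict of precision/recall/F1 per metric
print(scores["rouge1"].fmeasure, scores["rougeL"].fmeasure)
```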
17 pages, 6381 KiB  
Article
Sample Augmentation Using Enhanced Auxiliary Classifier Generative Adversarial Network by Transformer for Railway Freight Train Wheelset Bearing Fault Diagnosis
by Jing Zhao, Junfeng Li, Zonghao Yuan, Tianming Mu, Zengqiang Ma and Suyan Liu
Entropy 2024, 26(12), 1113; https://doi.org/10.3390/e26121113 - 20 Dec 2024
Abstract
Diagnosing faults in wheelset bearings is critical for train safety. The main challenge is that only a limited amount of fault sample data can be obtained during high-speed train operations. This scarcity of samples impacts the training and accuracy of deep learning models for wheelset bearing fault diagnosis. Studies show that the Auxiliary Classifier Generative Adversarial Network (ACGAN) demonstrates promising performance in addressing this issue. However, existing ACGAN models have drawbacks such as complexity, high computational expense, mode collapse, and vanishing gradients. To address these issues, this paper presents the Transformer and Auxiliary Classifier Generative Adversarial Network (TACGAN), which increases the diversity and complexity of the generated samples and maximizes their entropy. The transformer network replaces traditional convolutional neural networks (CNNs), avoiding iterative and convolutional structures and thereby reducing computational expense. Moreover, an independent classifier is integrated to avoid the coupling problem in the ACGAN, where the discriminator must simultaneously discriminate and classify. Finally, the Wasserstein distance is employed in the loss function to mitigate mode collapse and vanishing gradients. Experimental results on train wheelset bearing datasets demonstrate the accuracy and effectiveness of the TACGAN. Full article
(This article belongs to the Section Multidisciplinary Applications)
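The Wasserstein objective mentioned above replaces the usual cross-entropy GAN loss with a difference of critic means, with a Lipschitz constraint enforced separately (e.g., weight clipping or a gradient penalty, omitted here). A minimal PyTorch sketch under those assumptions; the network shapes and names are placeholders, not the TACGAN architecture.

```python
# pip install torch
import torch

# Placeholder critic; TACGAN uses a transformer-based network instead.
critic = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(),
                             torch.nn.Linear(32, 1))

def critic_loss(real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    # Critic maximizes E[D(real)] - E[D(fake)]; we minimize the negation.
    return critic(fake).mean() - critic(real).mean()

def generator_loss(fake: torch.Tensor) -> torch.Tensor:
    # Generator maximizes E[D(fake)], i.e., pushes fakes toward "real" scores.
    return -critic(fake).mean()

real, fake = torch.randn(8, 64), torch.randn(8, 64)
print(critic_loss(real, fake).item(), generator_loss(fake).item())
```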
26 pages, 1215 KiB  
Article
Network Coding-Enhanced Polar Codes for Relay-Assisted Visible Light Communication Systems
by Congduan Li, Mingyang Zhong, Yiqian Zhang, Dan Song, Nanfeng Zhang and Jingfeng Yang
Entropy 2024, 26(12), 1112; https://doi.org/10.3390/e26121112 - 19 Dec 2024
Abstract
This paper proposes a novel polar coding scheme tailored for indoor visible light communication (VLC) systems. Simulation results demonstrate a significant reduction in bit error rate (BER) compared to uncoded transmission, with a coding gain of at least 5 dB. Furthermore, the reliable communication area of the VLC system is substantially extended. Building on this foundation, this study explores the joint design of polar codes and physical-layer network coding (PNC) for VLC systems. Simulation results illustrate that the BER of our scheme closely approaches that of the conventional VLC relay scheme. Moreover, our approach doubles the throughput, cuts equipment expenses in half, and boosts effective bit rates per unit time-slot twofold. This proposed design noticeably advances the performance of VLC systems and is particularly well-suited for scenarios with low-latency demands. Full article
(This article belongs to the Special Issue Advances in Modern Channel Coding)
14 pages, 290 KiB  
Article
Bayesian Assessment of Corrosion-Related Failures in Steel Pipelines
by Fabrizio Ruggeri, Enrico Cagno, Franco Caron, Mauro Mancini and Antonio Pievatolo
Entropy 2024, 26(12), 1111; https://doi.org/10.3390/e26121111 - 19 Dec 2024
Abstract
The probability of gas escapes from steel pipelines due to different types of corrosion is studied with real failure data from an urban gas distribution network. Both the design and maintenance of the network are considered, identifying and estimating (in a Bayesian framework) an elementary multinomial model in the first case, and a more sophisticated non-homogeneous Poisson process in the second case. Special attention is paid to the elicitation of the experts’ opinions. We conclude that the corrosion process behaves quite differently depending on the type of corrosion, and that, in most cases, cathodically protected pipes should be installed. Full article
(This article belongs to the Special Issue Bayesianism)
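For the multinomial part of such an analysis, Bayesian updating has a simple conjugate form: a Dirichlet prior over corrosion-type probabilities plus observed failure counts yields a Dirichlet posterior. A minimal sketch with invented counts and prior pseudo-counts; the paper's actual data, expert elicitation, and non-homogeneous Poisson process model are not reproduced.

```python
import numpy as np

types = ["galvanic", "pitting", "stress", "uniform"]   # illustrative labels
counts = np.array([12.0, 7.0, 3.0, 5.0])   # invented failure counts per type
prior = np.array([2.0, 2.0, 1.0, 1.0])     # Dirichlet pseudo-counts (expert opinion)

posterior = prior + counts                  # conjugate Dirichlet update
post_mean = posterior / posterior.sum()     # posterior mean failure probabilities
for t, p in zip(types, post_mean):
    print(f"{t}: {p:.3f}")
```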
20 pages, 5238 KiB  
Article
A Novel Video Compression Approach Based on Two-Stage Learning
by Dan Shao, Ning Wang, Pu Chen, Yu Liu and Lin Lin
Entropy 2024, 26(12), 1110; https://doi.org/10.3390/e26121110 - 19 Dec 2024
Abstract
In recent years, the rapid growth of video data has posed challenges for storage and transmission. Video compression techniques provide a viable solution to this problem. In this study, we propose a bidirectional coding video compression model named DeepBiVC, which is based on two-stage learning. Firstly, we preprocess the video data by segmenting the video stream into groups of continuous image frames, with each group comprising five frames. Then, in the first stage, we develop an image compression module based on an invertible neural network (INN) model to compress the first and last frames of each group. In the second stage, we design a video compression module that compresses the intermediate frames using bidirectional optical flow estimation. Experimental results indicate that DeepBiVC outperforms other state-of-the-art video compression methods regarding PSNR and MS-SSIM metrics. Specifically, on the UVG dataset at bpp = 0.3, DeepBiVC achieves a PSNR of 37.16 and an MS-SSIM of 0.98. Full article
(This article belongs to the Special Issue Information Theory and Coding for Image/Video Processing)
17 pages, 4585 KiB  
Article
Effects of Temperature and Random Forces in Phase Transformation of Multi-Stable Systems
by Giuseppe Florio, Stefano Giordano and Giuseppe Puglisi
Entropy 2024, 26(12), 1109; https://doi.org/10.3390/e26121109 - 18 Dec 2024
Abstract
Multi-stable behavior at the microscopic length scale is fundamental for phase transformation phenomena observed in many materials. These phenomena can be driven not only by external mechanical forces but are also crucially influenced by disorder and thermal fluctuations. Disorder, arising from structural defects or fluctuations in external stimuli, disrupts the homogeneity of the material and can significantly alter the system’s response, often leading to the suppression of cooperativity in the phase transition. Temperature can further introduce novel effects, modifying energy barriers and transition rates. The study of the effects of fluctuations requires a framework that naturally incorporates the interaction of the system with the environment, such as statistical mechanics, to account for the role of temperature. In the case of complex phenomena induced by disorder, advanced methods such as the replica method (to derive analytical formulas) or refined numerical methods based, for instance, on Monte Carlo techniques may be needed. In particular, employing models that incorporate the main features of the physical system under investigation and allow for analytical results that can be compared with experimental data is of paramount importance for describing many realistic physical phenomena, which are often studied while neglecting the critical effect of randomness or by relying solely on numerical techniques. Additionally, it is fundamental to derive the macroscopic material behavior efficiently from microscale properties, rather than relying solely on phenomenological approaches. In this perspective, we focus on a paradigmatic model that includes both nearest-neighbor interactions with multi-stable (elastic) energy terms and linear long-range interactions, capable of ensuring the presence of an ordered phase. Specifically, to study the effect of environmental noise on the control of the system, we include random fluctuations in the external forces. We numerically analyze, on a small-size system, how the interplay of temperature and disorder can significantly alter the system’s phase transition behavior. Moreover, by mapping the model onto a modified version of the Random Field Ising Model, we use the replica method in the thermodynamic limit to support the numerical results with analytical insights. Full article
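For reference, the Random Field Ising Model mentioned above has the standard Hamiltonian below; the symbols are generic, and the paper's modified version and its mapping are not reproduced here.

```latex
H = -J \sum_{\langle i,j\rangle} s_i s_j \;-\; \sum_i \left(h + h_i\right) s_i ,
\qquad s_i = \pm 1, \quad h_i \sim \mathcal{N}(0,\sigma^2),
```

where J couples neighboring spins, h is the applied field, and the quenched random fields h_i model the disorder whose strength competes with temperature in shaping the transition.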
19 pages, 2992 KiB  
Article
Asymmetric Cyclic Controlled Quantum Teleportation via Multiple-Qubit Entangled State in a Noisy Environment
by Hanxuan Zhou
Entropy 2024, 26(12), 1108; https://doi.org/10.3390/e26121108 - 18 Dec 2024
Abstract
In this paper, using an eleven-qubit entangled state as the quantum channel, we propose a novel cyclic and asymmetric protocol for four participants in which Alice and Bob can each transmit two-qubit states, and Charlie can transmit three-qubit states, with the assistance of the supervisor David, who guarantees communication security. The protocol implements the communication task using GHZ-state measurements, single-qubit measurements (SM), and unitary operations (UO). The analysis demonstrates that the success probability of the proposed protocol can reach 100%. Furthermore, considering that noise in quantum channels is difficult to avoid in practical environments, this paper also analyzes the fidelity in four noisy scenarios: bit-flip, phase-flip, bit-phase-flip, and depolarizing noise, showing that communication quality depends only on the amplitude parameters of the initial state and the decoherence rate. Additionally, we compare the protocol with previous similar schemes in terms of method and intrinsic efficiency, illustrating its superiority. Finally, in response to the vulnerability of quantum channels to external attacks, a security analysis is conducted and corresponding defensive measures are proposed. Full article
(This article belongs to the Section Quantum Information)
15 pages, 726 KiB  
Article
W-Class States—Identification and Quantification of Bell-CHSH Inequalities’ Violation
by Joanna K. Kalaga, Wiesław Leoński and Jan Peřina, Jr.
Entropy 2024, 26(12), 1107; https://doi.org/10.3390/e26121107 - 18 Dec 2024
Abstract
We discuss a family of W-class states describing three-qubit systems. For such systems, we analyze the relations between the entanglement measures and the nonlocality parameter for a two-mode mixed state related to the two-qubit subsystem. We find the conditions determining the boundary values of the negativity, parameterized by concurrence, for violating the Bell-CHSH inequality. Additionally, we derive the value ranges of the mixedness measure, parameterized by concurrence and negativity for the qubit–qubit mixed state, guaranteeing the violation and non-violation of the Bell-CHSH inequality. Full article
(This article belongs to the Special Issue Entropy in Classical and Quantum Information Theory with Applications)
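For context, the Bell-CHSH setting referenced here uses the standard CHSH operator and bounds (textbook forms, not the paper's specific W-class parameterization):

```latex
\mathcal{B} = A_1 \otimes (B_1 + B_2) + A_2 \otimes (B_1 - B_2),
\qquad \bigl|\langle \mathcal{B} \rangle_{\mathrm{LHV}}\bigr| \le 2,
\qquad \bigl|\langle \mathcal{B} \rangle_{\mathrm{QM}}\bigr| \le 2\sqrt{2},
```

and, by the Horodecki criterion, a two-qubit state ρ violates the CHSH inequality for some measurement settings if and only if M(ρ) > 1, where M(ρ) is the sum of the two largest eigenvalues of T_ρᵀT_ρ built from the state's correlation matrix T_ρ.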
16 pages, 297 KiB  
Article
The Boltzmann Equation and Its Place in the Edifice of Statistical Mechanics
by Charlotte Werndl and Roman Frigg
Entropy 2024, 26(12), 1106; https://doi.org/10.3390/e26121106 - 18 Dec 2024
Abstract
It is customary to classify approaches in statistical mechanics (SM) as belonging either to Boltzmannian SM (BSM) or Gibbsian SM (GSM). It is, however, unclear how the Boltzmann equation (BE) fits into either of these approaches. To discuss the relation between the BE and BSM, we first present a version of BSM that differs from standard presentations in that it uses local field variables to individuate macro-states, and we then show that the BE is a special case of BSM thus understood. To discuss the relation between the BE and GSM, we focus on the BBGKY hierarchy and note that the version of the BE that follows from the hierarchy is “Gibbsian” only in the minimal sense that it operates with an invariant measure on the state space of the full system. Full article
(This article belongs to the Special Issue Time and Temporal Asymmetries)
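For reference, the Boltzmann equation under discussion has the standard form (textbook notation, not tied to the paper's specific presentation):

```latex
\frac{\partial f}{\partial t}
+ \mathbf{v}\cdot\nabla_{\mathbf{x}} f
+ \frac{\mathbf{F}}{m}\cdot\nabla_{\mathbf{v}} f
= \left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}},
```

where f(x, v, t) is the one-particle distribution function and the right-hand side is the binary collision integral obtained under the molecular-chaos assumption (Stosszahlansatz).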
13 pages, 370 KiB  
Article
Enumerating Finitary Processes
by Benjamin D. Johnson, James P. Crutchfield, Christopher J. Ellison and Carl S. McTague
Entropy 2024, 26(12), 1105; https://doi.org/10.3390/e26121105 - 17 Dec 2024
Abstract
We show how to efficiently enumerate a class of finite-memory stochastic processes using the causal representation of ϵ-machines. We characterize ϵ-machines in the language of automata theory and adapt a recent algorithm for generating accessible deterministic finite automata, pruning this over-large class down to that of ϵ-machines. As an application, we exactly enumerate topological ϵ-machines up to eight states and six-letter alphabets. Full article
(This article belongs to the Section Complexity)
11 pages, 586 KiB  
Article
Stochastic Gradient Descent for Kernel-Based Maximum Correntropy Criterion
by Tiankai Li, Baobin Wang, Chaoquan Peng and Hong Yin
Entropy 2024, 26(12), 1104; https://doi.org/10.3390/e26121104 - 17 Dec 2024
Abstract
The maximum correntropy criterion (MCC) has been an important method in the machine learning and signal processing communities since it was successfully applied in various non-Gaussian noise scenarios. In comparison with the classical least squares method (LS), which takes only the second-order moments of models into consideration and leads to a convex optimization problem, MCC captures the higher-order information of models that plays a crucial role in robust learning, which is usually accompanied by solving non-convex optimization problems. Theoretical research on convex optimization has made significant achievements, while the theoretical understanding of non-convex optimization is still far from mature. Motivated by the popularity of stochastic gradient descent (SGD) for solving non-convex problems, this paper considers SGD applied to the kernel version of MCC, which has been shown to be robust to outliers and non-Gaussian data in nonlinear structural models. As the existing theoretical results for the SGD algorithm applied to kernel MCC are not well established, we present a rigorous analysis of its convergence behavior and provide explicit convergence rates under some standard conditions. Our work fills the gap between the optimization process and convergence during the iterations: the iterates need to converge to the global minimizer even though the obtained estimator cannot ensure global optimality at each step of the learning process. Full article
(This article belongs to the Special Issue Advances in Probabilistic Machine Learning)
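The correntropy-induced (Welsch) loss ℓ(e) = σ²(1 − exp(−e²/2σ²)) yields an online kernel update in which each error is down-weighted by a Gaussian factor, which is what confers outlier robustness. A minimal sketch, assuming a Gaussian kernel and a constant step size; the paper's decaying step sizes and analysis conditions are not reproduced.

```python
import numpy as np

def gauss_kernel(x, c, gamma=2.0):
    return np.exp(-gamma * (x - c) ** 2)

def kernel_mcc_sgd(X, Y, sigma=1.0, eta=0.4, gamma=2.0):
    """Online SGD for kernel regression under the maximum correntropy criterion."""
    centers, coefs = [], []
    for x, y in zip(X, Y):
        f_x = sum(a * gauss_kernel(x, c, gamma) for a, c in zip(coefs, centers))
        e = y - f_x
        w = np.exp(-e ** 2 / (2 * sigma ** 2))  # Gaussian weight: outliers get w ~ 0
        centers.append(x)
        coefs.append(eta * w * e)               # functional SGD step adds one kernel term
    return centers, coefs

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
Y = np.sin(np.pi * X) + 0.05 * rng.normal(size=200)
Y[::25] += 5.0                                  # inject gross outliers
centers, coefs = kernel_mcc_sgd(X, Y)
```

With plain squared loss the same online scheme would chase the injected outliers; the exponential weight suppresses their updates almost entirely.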
21 pages, 793 KiB  
Article
An Entropy Dynamics Approach to Inferring Fractal-Order Complexity in the Electromagnetics of Solids
by Basanta R. Pahari and William Oates
Entropy 2024, 26(12), 1103; https://doi.org/10.3390/e26121103 - 17 Dec 2024
Abstract
A fractal-order entropy dynamics model is developed to create a modified form of Maxwell’s time-dependent electromagnetic equations. The approach uses an information-theoretic method, combining Shannon’s entropy with fractional moment constraints in time and space. Optimization of the cost function leads to a time-dependent Bayesian posterior density that is used to homogenize the electromagnetic fields. Self-consistency between maximizing entropy, the inference of Bayesian posterior densities, and a fractal-order version of Maxwell’s equations is developed. We first give a set of fractal derivative definitions and their relation to the divergence, curl, and Laplacian operators. The fractal-order entropy dynamics framework is then introduced to infer the Bayesian posterior, and its application to modeling homogenized electromagnetic fields in solids is presented. The results provide a methodology to help understand complexity from limited electromagnetic data using maximum entropy, by formulating a fractal form of Maxwell’s electromagnetic equations. Full article
(This article belongs to the Section Statistical Physics)
14 pages, 546 KiB  
Article
Routing Algorithm Within the Multiple Non-Overlapping Paths’ Approach for Quantum Key Distribution Networks
by Evgeniy O. Kiktenko, Andrey Tayduganov and Aleksey K. Fedorov
Entropy 2024, 26(12), 1102; https://doi.org/10.3390/e26121102 - 16 Dec 2024
Abstract
We develop a novel key routing algorithm for quantum key distribution (QKD) networks that distributes keys between remote nodes, i.e., nodes not directly connected by a QKD link, through multiple non-overlapping paths. This approach focuses on the security of a QKD network by minimizing the potential vulnerabilities associated with individual trusted nodes. The algorithm ensures a balanced allocation of the workload across the QKD network links while aiming for the target key generation rate between directly connected and remote nodes. We present the results of testing the algorithm on two QKD network models consisting of 6 and 10 nodes. The testing demonstrates the ability of the algorithm to distribute secure keys among the nodes of the network in an all-to-all manner, ensuring that the information-theoretic security of the keys between remote nodes is maintained even when one of the trusted nodes is compromised. These results highlight the potential of the algorithm to improve the performance of QKD networks. Full article
(This article belongs to the Special Issue Quantum Communications Networks: Trends and Challenges)
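The security primitive behind multipath key distribution can be illustrated with simple XOR secret sharing: a key sent as independent random shares over node-disjoint paths stays information-theoretically secret unless every path is compromised. The sketch below shows only that primitive; the routing, load balancing, and rate logic of the paper's algorithm are not reproduced.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n_paths: int) -> list[bytes]:
    """Split a key into n_paths XOR shares, one per non-overlapping path."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_paths - 1)]
    shares.append(reduce(xor_bytes, shares, key))  # final share completes the XOR
    return shares

def combine(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

key = secrets.token_bytes(32)
assert combine(split_key(key, 3)) == key  # any n-1 shares alone reveal nothing
```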
11 pages, 441 KiB  
Article
Symplectic Bregman Divergences
by Frank Nielsen
Entropy 2024, 26(12), 1101; https://doi.org/10.3390/e26121101 - 16 Dec 2024
Abstract
We present a generalization of Bregman divergences in finite-dimensional symplectic vector spaces that we term symplectic Bregman divergences. Symplectic Bregman divergences are derived from a symplectic generalization of the Fenchel–Young inequality, which relies on the notion of symplectic subdifferentials. The symplectic Fenchel–Young inequality is obtained using the symplectic Fenchel transform, which is defined with respect to the symplectic form. Since symplectic forms can be built generically from pairings of dual systems, we obtain a generalization of Bregman divergences in dual systems via equivalent symplectic Bregman divergences. In particular, when the symplectic form is derived from an inner product, we show that the corresponding symplectic Bregman divergences amount to ordinary Bregman divergences with respect to composite inner products. Some potential applications of symplectic divergences in geometric mechanics, information geometry, and learning dynamics in machine learning are touched upon. Full article
(This article belongs to the Special Issue Information Geometry for Data Analysis)
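For orientation, the ordinary Bregman divergence being generalized here is (standard definition; the paper's symplectic subdifferential machinery is not reproduced):

```latex
B_F(p : q) \;=\; F(p) - F(q) - \bigl\langle p - q,\; \nabla F(q)\bigr\rangle ,
```

for a strictly convex, differentiable generator F. The symplectic construction replaces the duality pairing ⟨·,·⟩ by a symplectic form ω, with the Fenchel transform taken with respect to ω.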
26 pages, 552 KiB  
Article
Sleep Stage Classification Through HRV, Complexity Measures, and Heart Rate Asymmetry Using Generalized Estimating Equations Models
by Bartosz Biczuk, Sebastian Żurek, Szymon Jurga, Elżbieta Turska, Przemysław Guzik and Jarosław Piskorski
Entropy 2024, 26(12), 1100; https://doi.org/10.3390/e26121100 - 16 Dec 2024
Abstract
This study investigates whether heart rate asymmetry (HRA) parameters offer insights into sleep stages beyond those provided by conventional heart rate variability (HRV) and complexity measures. Utilizing 31 polysomnographic recordings, we focused exclusively on electrocardiogram (ECG) data, specifically the RR interval time series, to explore heart rate dynamics associated with different sleep stages. Employing both statistical techniques and machine learning models, with the Generalized Estimating Equation model as the foundational approach, we assessed the effectiveness of HRA in identifying and differentiating sleep stages and transitions. The models including asymmetric variables for detecting the deep sleep stages N2 and N3 achieved AUCs of 0.85 and 0.89, respectively; those for the transitions N2–R and R–N2, i.e., falling into and out of REM sleep, achieved AUCs of 0.85 and 0.80; and that for W–N1, i.e., falling asleep, an AUC of 0.83. All these models were highly statistically significant. The findings demonstrate that HRA parameters provide significant, independent information about sleep stages that is not captured by HRV and complexity measures alone. This additional insight into sleep physiology potentially leads to a better understanding of heart rhythm during sleep and to more precise diagnostic tools, including cheap portable devices, for identifying sleep-related disorders. Full article
(This article belongs to the Section Multidisciplinary Applications)
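As a concrete example of an HRA parameter of the kind used in such studies, Guzik's index (GI) measures the contribution of decelerations (points above the line of identity in a Poincaré plot) to short-term variability; values different from 50% indicate asymmetry. A minimal sketch, assuming RR intervals in milliseconds; the paper's full parameter set and GEE modeling are not reproduced.

```python
import numpy as np

def guzik_index(rr_ms: np.ndarray) -> float:
    """Guzik's index: percentage of squared Poincare-plot distances from the
    identity line contributed by decelerations (RR[i+1] > RR[i])."""
    d = np.diff(rr_ms)        # point-to-line distances scale with |d|;
    nonzero = d[d != 0]       # the common 1/sqrt(2) factor cancels in the ratio
    decel = nonzero[nonzero > 0]
    return 100.0 * np.sum(decel**2) / np.sum(nonzero**2)

rr = np.array([812, 820, 815, 830, 825, 818, 822], dtype=float)
print(f"GI = {guzik_index(rr):.1f}%")   # 50% would mean perfect symmetry
```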
19 pages, 3253 KiB  
Article
Federated Collaborative Learning with Sparse Gradients for Heterogeneous Data on Resource-Constrained Devices
by Mengmeng Li, Xin He and Jinhua Chen
Entropy 2024, 26(12), 1099; https://doi.org/10.3390/e26121099 - 16 Dec 2024
Abstract
Federated learning enables devices to train models collaboratively while protecting data privacy. However, the computing power, memory, and communication capabilities of IoT devices are limited, making it difficult to train large-scale models on these devices. To train large models on resource-constrained devices, federated split learning allows for the parallel training of multiple devices by dividing the model across different devices. However, under this framework, the client is heavily dependent on the server’s computing resources, and a large number of model parameters must be transmitted during communication, which leads to low training efficiency. In addition, due to the heterogeneous distribution among clients, it is difficult for the trained global model to apply to all clients. To address these challenges, this paper designs a sparse-gradient collaborative federated learning model for heterogeneous data on resource-constrained devices. First, a sparse gradient strategy is designed by introducing a position mask to reduce the traffic. To minimize accuracy loss, a dequantization strategy is applied to restore the original dense gradient tensor. Second, the influence of each client on the global model is measured by Euclidean distance, and on this basis an aggregation weight is assigned to each client, yielding an adaptive weight strategy. Finally, the sparse gradient quantization method is combined with the adaptive weighting strategy to design a collaborative federated learning algorithm for heterogeneous data distributions. Extensive experiments demonstrate that the proposed algorithm achieves high classification efficiency, effectively addressing the challenges posed by data heterogeneity. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
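The two mechanisms described, a position mask for sparse gradients and Euclidean-distance-based aggregation weights, can be sketched as follows. This is a minimal illustration under stated assumptions (top-k magnitude selection, inverse-distance weighting); the paper's quantization and adaptive-weight details are not reproduced.

```python
import numpy as np

def sparsify_topk(grad: np.ndarray, k: int):
    """Keep the k largest-magnitude entries; send values plus a boolean position mask."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    mask = np.zeros(grad.size, dtype=bool)
    mask[idx] = True
    return grad[mask], mask

def densify(values: np.ndarray, mask: np.ndarray) -> np.ndarray:
    out = np.zeros(mask.size)
    out[mask] = values          # restore a dense tensor from values + mask
    return out

def aggregate(client_grads, global_grad):
    """Weight each client by inverse Euclidean distance to the current global
    gradient (assumed weighting direction: closer clients count more)."""
    d = np.array([np.linalg.norm(g - global_grad) for g in client_grads])
    w = 1.0 / (d + 1e-8)
    w /= w.sum()
    return sum(wi * gi for wi, gi in zip(w, client_grads))

g = np.random.default_rng(1).normal(size=10)
vals, mask = sparsify_topk(g, k=3)
print(densify(vals, mask))
```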
10 pages, 589 KiB  
Article
Axiomatic Approach to Measures of Total Correlations
by Gabriel L. Moraes, Renato M. Angelo and Ana C. S. Costa
Entropy 2024, 26(12), 1098; https://doi.org/10.3390/e26121098 - 15 Dec 2024
Abstract
Correlations play a pivotal role in various fields of science, particularly in quantum mechanics, yet their proper quantification remains a subject of debate. In this work, we discuss the challenge of defining a reliable measure of total correlations. We first outline the essential properties that an effective correlation measure should satisfy and review existing measures, including quantum mutual information (QMI), the p-norm of the correlation matrix, and the recently defined quantum Pearson correlation coefficient. Additionally, we introduce new measures based on Rényi and Tsallis relative entropies, as well as the Kullback–Leibler divergence. Our analysis reveals that while QMI, the p-norm, and the Pearson measure are equivalent for two-qubit systems, they all suffer from an ordering problem. Despite criticisms regarding its reliability, we argue that QMI remains a valid measure of total correlations. Full article
(This article belongs to the Section Quantum Information)
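For reference, the quantum mutual information discussed here is the standard quantity (textbook definition):

```latex
I(A{:}B) \;=\; S(\rho_A) + S(\rho_B) - S(\rho_{AB}),
\qquad S(\rho) = -\mathrm{Tr}\,\rho \log \rho ,
```

which is non-negative and vanishes exactly on product states ρ_AB = ρ_A ⊗ ρ_B.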
12 pages, 2085 KiB  
Article
Stochastic Model for a Piezoelectric Energy Harvester Driven by Broadband Vibrations
by Angelo Sanfelice, Luigi Costanzo, Alessandro Lo Schiavo, Alessandro Sarracino and Massimo Vitelli
Entropy 2024, 26(12), 1097; https://doi.org/10.3390/e26121097 - 14 Dec 2024
Abstract
We present an experimental and numerical study of a piezoelectric energy harvester driven by broadband vibrations. This device can extract power from random fluctuations and can be described by a stochastic model, based on an underdamped Langevin equation with white noise, which mimics the dynamics of the piezoelectric material. A crucial point in the modeling is the appropriate description of the coupled load circuit that is necessary to harvest electrical energy. We consider a linear load (a resistance) and a nonlinear load (a diode bridge rectifier connected to the parallel combination of a capacitance and a load resistance), and focus on the characteristic curve of the extracted power as a function of the load resistance in order to estimate the optimal parameter values that maximize the collected energy. In both cases, we find good agreement between numerical simulations of the theoretical model and the experimental results. In particular, we observe a non-monotonic behavior of the characteristic curve, which signals the presence of an optimal value of the load resistance at which the extracted power is maximized. We also address a more theoretical issue related to inferring the non-equilibrium features of the system from data: we show that the analysis of high-order correlation functions of the relevant variables, in the presence of nonlinearities, can represent a simple and effective tool to check for irreversible dynamics. Full article
(This article belongs to the Special Issue Control of Driven Stochastic Systems: From Shortcuts to Optimality)
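A common lumped-parameter form of such a model couples an underdamped Langevin oscillator to the load circuit; for the linear resistive case it reads as below. This is a sketch consistent with the abstract, with generic symbols, not the paper's exact equations (and the nonlinear rectifier load is not shown).

```latex
m\ddot{x} = -U'(x) - \gamma \dot{x} - \theta v + \sqrt{2\gamma k_B T}\,\xi(t),
\qquad
C_p \dot{v} = \theta \dot{x} - \frac{v}{R},
```

where x is the mechanical coordinate, v the voltage across the load resistance R, θ the electromechanical coupling, C_p the piezo capacitance, and ξ(t) white noise mimicking the broadband vibrations.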
16 pages, 1428 KiB  
Article
A Definition of a Heywood Case in Item Response Theory Based on Fisher Information
by Jay Verkuilen and Peter J. Johnson
Entropy 2024, 26(12), 1096; https://doi.org/10.3390/e26121096 - 14 Dec 2024
Abstract
Heywood cases and other improper solutions occur frequently in latent variable models, e.g., factor analysis, item response theory, latent class analysis, multilevel models, and structural equation models, all of which are models with response variables taken from an exponential family. They have important consequences for scoring with the latent variable model and are indicative of issues in a model, such as poor identification or model misspecification. In the context of the 2PL and 3PL models in IRT, they are more frequently known as Guttman items and are identified by having a discrimination parameter that is deemed excessively large. Other IRT models, such as the newer asymmetric item response theory (AsymIRT) or polytomous IRT models, often have parameters that are not easy to interpret directly, so scanning parameter estimates is not necessarily indicative of the presence of problematic values. Graphical examination of the IRF can be useful but is necessarily subjective and highly dependent on choices of graphical defaults. We propose using the derivatives of the IRF, item Fisher information functions, and our proposed Item Fraction of Total Information (IFTI) decomposition metric to bypass the parameters, allowing for more concrete and consistent identification of Heywood cases. We illustrate the approach with empirical examples using AsymIRT and nominal response models. Full article
(This article belongs to the Special Issue Applications of Fisher Information in Sciences II)
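To make the Fisher-information viewpoint concrete, under the standard 2PL model the item response function and item information are (textbook results; the paper's IFTI metric builds on quantities of this kind):

```latex
P_j(\theta) = \frac{1}{1 + e^{-a_j(\theta - b_j)}},
\qquad
I_j(\theta) = a_j^{2}\, P_j(\theta)\bigl(1 - P_j(\theta)\bigr),
```

so an excessively large discrimination a_j concentrates nearly all of the test information in a single item, which is the signature a fraction-of-total-information metric can flag.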
20 pages, 3282 KiB  
Article
A Near-Wall Methodology for Large-Eddy Simulation Based on Dynamic Hybrid RANS-LES
by Michael Tullis and D. Keith Walters
Entropy 2024, 26(12), 1095; https://doi.org/10.3390/e26121095 - 14 Dec 2024
Abstract
Attempts to mitigate the computational cost of fully resolved large-eddy simulation (LES) in the near-wall region include both the hybrid Reynolds-averaged Navier–Stokes/LES (HRL) and wall-modeled LES (WMLES) approaches. This paper presents an LES wall treatment method that combines key attributes of the two, in which the boundary layer mesh is sized in the streamwise and spanwise directions comparable to WMLES, and the wall-normal mesh is comparable to a RANS simulation without wall functions. A mixing length model is used to prescribe an eddy viscosity in the near-wall region, with the mixing length scale limited based on local mesh size. The RANS and LES regions are smoothly blended using the dynamic hybrid RANS-LES (DHRL) framework. The results are presented for the turbulent channel flow at two Reynolds numbers, and comparison to the DNS results shows that the mean and fluctuating quantities are reasonably well predicted with no apparent log-layer mismatch. A detailed near-wall meshing strategy for the proposed method is presented, and estimates indicate that it can be implemented with approximately twice the number of grid points as traditional WMLES, while avoiding the difficulties associated with analytical or numerical wall functions and modified wall boundary conditions. Full article
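The near-wall closure described, a mixing-length eddy viscosity with a mesh-based limiter, can be summarized as below; this is a sketch consistent with the abstract, and the constant and exact limiter form are assumptions rather than the paper's formulation.

```latex
\nu_t = \ell_{\mathrm{mix}}^{2}\,\bigl|\bar{S}\bigr|,
\qquad
\ell_{\mathrm{mix}} = \min\!\bigl(\kappa y,\; C_\Delta \Delta\bigr),
```

where κ is the von Kármán constant, y the wall distance, |S̄| the mean strain-rate magnitude, and Δ a local mesh length scale that caps the RANS mixing length before the DHRL blending hands over to resolved LES content.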
11 pages, 366 KiB  
Article
Perturbational Decomposition Analysis for Quantum Ising Model with Weak Transverse Fields
by Youning Li, Junfeng Huang, Chao Zhang and Jun Li
Entropy 2024, 26(12), 1094; https://doi.org/10.3390/e26121094 - 14 Dec 2024
Abstract
This work presents a perturbational decomposition method for simulating quantum evolution under the one-dimensional Ising model with both longitudinal and transverse fields. By treating the transverse field terms as perturbations in the expansion, our approach is particularly effective in systems with moderate longitudinal fields and weak to moderate transverse fields relative to the coupling strength. Through systematic numerical exploration, we characterize parameter regimes and evolution time windows where the decomposition achieves measurable improvements over conventional Trotter decomposition methods. The developed perturbational approach and its characterized parameter space may provide practical guidance for choosing appropriate simulation strategies in different parameter regimes of the one-dimensional Ising model. Full article
(This article belongs to the Special Issue Quantum Information: Working towards Applications)
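The Hamiltonian in question, with the transverse-field term treated as the perturbation, is (standard notation consistent with the abstract):

```latex
H = \underbrace{-J\sum_i \sigma_i^{z}\sigma_{i+1}^{z} - h_z \sum_i \sigma_i^{z}}_{H_0}
\;\underbrace{-\;h_x \sum_i \sigma_i^{x}}_{V},
```

with the expansion organized in powers of V, so the approach is most accurate when h_x is weak to moderate relative to the coupling J and the longitudinal field h_z.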
12 pages, 2865 KiB  
Article
Thermodynamic Behavior of Doped Graphene: Impact of Heavy Dopant Atoms
by L. Palma-Chilla and Juan A. Lazzús
Entropy 2024, 26(12), 1093; https://doi.org/10.3390/e26121093 - 14 Dec 2024
Abstract
This study investigates the effect of incorporating heavy dopant atoms on the topological transitions in the energy spectrum of graphene, as well as on its thermodynamic properties. A tight-binding model is employed that incorporates a lattice composition parameter associated with the dopant’s effect to obtain the electronic spectrum of graphene. Thus, the substitutional atoms in the lattice impact the electronic structure of graphene by altering the connectivity of the Dirac cones and the symmetry of the energy surface in their spectrum. The Gibbs entropy is numerically calculated from the energy surface of the electronic spectrum, and other thermodynamic properties, such as temperature, specific heat, and Helmholtz free energy, are derived from theoretical principles. The results show that topological changes induced by the heavy dopant atoms in the graphene lattice significantly affect its electronic structure and thermodynamic properties, leading to observable changes in the distances between Dirac cones, the range of the energy spectrum, entropy, positive and negative temperatures, divergences in specific heat, and instabilities within the system. Full article
(This article belongs to the Section Thermodynamics)
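For reference, pristine graphene's nearest-neighbor tight-binding dispersion, whose Dirac cones the dopant-dependent composition parameter deforms, is (standard result; the paper's doped generalization is not reproduced):

```latex
E_{\pm}(\mathbf{k}) = \pm t \left| 1 + e^{i\mathbf{k}\cdot\mathbf{a}_1} + e^{i\mathbf{k}\cdot\mathbf{a}_2} \right| ,
```

where t is the hopping energy and a_1, a_2 the lattice vectors; the bands touch at two inequivalent Dirac points, and perturbing the lattice (e.g., with heavy substitutional atoms) moves or merges these touching points, consistent with the topological transitions studied.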
20 pages, 15341 KiB  
Article
Spontaneous Emergence of Agent Individuality Through Social Interactions in Large Language Model-Based Communities
by Ryosuke Takata, Atsushi Masumori and Takashi Ikegami
Entropy 2024, 26(12), 1092; https://doi.org/10.3390/e26121092 - 13 Dec 2024
Abstract
We study the emergence of agency from scratch by using Large Language Model (LLM)-based agents. In previous studies of LLM-based agents, each agent’s characteristics, including personality and memory, have traditionally been predefined. We focused on how individuality, such as behavior, personality, and memory, can be differentiated from an undifferentiated state. The present LLM agents engage in cooperative communication within a group simulation, exchanging context-based messages in natural language. By analyzing this multi-agent simulation, we report valuable new insights into how social norms, cooperation, and personality traits can emerge spontaneously. This paper demonstrates that autonomously interacting LLM-powered agents generate hallucinations and hashtags to sustain communication, which, in turn, increases the diversity of words within their interactions. Each agent’s emotions shift through communication, and as they form communities, the personalities of the agents emerge and evolve accordingly. This computational modeling approach and its findings will provide a new method for analyzing collective artificial intelligence. Full article