Entropy doi: 10.3390/e26030269
Authors: Haiju Fan, Jinsong Wang
Recent studies on watermarking techniques based on image carriers have demonstrated new approaches that combine adversarial perturbations against steganalysis with embedding distortions. However, while these methods successfully counter convolutional neural network-based steganalysis, they do not adequately protect the data of the carrier itself. Recognizing the high sensitivity of Deep Neural Networks (DNNs) to small perturbations, we propose HAG-NET, an image-carrier-based method in which an encoder, a decoder, and an attacker are jointly trained. The encoder generates Adversarial Steganographic Examples (ASEs) that are adversarial to a target classification network, thereby protecting the carrier data, while the decoder recovers the secret data from the ASEs. Experimental results demonstrate that ASEs produced by HAG-NET achieve an average success rate of over 99% on both the MNIST and CIFAR-10 datasets. ASEs generated with the attacker exhibit greater robustness in attack ability, with an average increase of about 3.32%. Furthermore, compared with other generative stego examples under similar perturbation strength, those produced by our method contain significantly more information as measured by image information entropy.
Entropy doi: 10.3390/e26030268
Authors: Chao Zhao, Ali Al-Bashabsheh, Chung Chan
We address the challenge of identifying meaningful communities by proposing a model based on convex game theory and a measure of community strength. Many existing community detection methods fail to provide unique solutions, and it remains unclear how the solutions depend on initial conditions. Our approach identifies strong communities with a hierarchical structure, visualizable as a dendrogram, and computable in polynomial time using submodular function minimization. This framework extends beyond graphs to hypergraphs or even polymatroids. In the case when the model is graphical, a more efficient algorithm based on the max-flow min-cut algorithm can be devised. Though not achieving near-linear time complexity, the pursuit of practical algorithms is an intriguing avenue for future research. Our work serves as the foundation, offering an analytical framework that yields unique solutions with clear operational meaning for the communities identified.
Entropy doi: 10.3390/e26030267
Authors: Karl Svozil
Space-time in quantum mechanics is about bridging Hilbert and configuration space. Thereby, an entirely new perspective is obtained by replacing the Newtonian space-time theater with the image of a presumably high-dimensional Hilbert space, through which space-time becomes an epiphenomenon construed by internal observers.
Entropy doi: 10.3390/e26030266
Authors: Henrik Jeldtoft Jensen, Piergiulio Tempesta
Entropy can signify different things. For instance, heat transfer in thermodynamics or a measure of information in data analysis. Many entropies have been introduced, and it can be difficult to ascertain their respective importance and merits. Here, we consider entropy in an abstract sense, as a functional on a probability space, and we review how being able to handle the trivial case of non-interacting systems, together with the subtle requirement of extensivity, allows for a systematic classification of the functional form.
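As a minimal illustration of the composition rule over non-interacting systems discussed above, the following sketch (plain Python, illustrative distributions of my own choosing) checks that Shannon entropy is additive over independent systems, while the Tsallis entropy obeys a deformed (pseudo-additive) composition law:

```python
import math

def shannon(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def tsallis(p, q):
    """Tsallis entropy with index q != 1; recovers Shannon as q -> 1."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def product(p, r):
    """Joint distribution of two independent (non-interacting) systems."""
    return [pi * rj for pi in p for rj in r]

p = [0.5, 0.5]
r = [0.25, 0.75]

# Shannon entropy is additive over independent systems:
assert abs(shannon(product(p, r)) - (shannon(p) + shannon(r))) < 1e-12

# Tsallis entropy is pseudo-additive:
#   S_q(p x r) = S_q(p) + S_q(r) + (1 - q) S_q(p) S_q(r)
q = 2.0
lhs = tsallis(product(p, r), q)
rhs = tsallis(p, q) + tsallis(r, q) + (1 - q) * tsallis(p, q) * tsallis(r, q)
assert abs(lhs - rhs) < 1e-12
```

Requirements of this kind (behavior on non-interacting systems, extensivity) are exactly the handles the review uses to classify entropy functionals.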
Entropy doi: 10.3390/e26030265
Authors: Rubén Gómez González, Vicente Garzó
The Boltzmann kinetic equation for dilute granular suspensions under simple (or uniform) shear flow (USF) is considered to determine the non-Newtonian transport properties of the system. In contrast to previous attempts based on a coarse-grained description, our suspension model accounts for the real collisions between grains and particles of the surrounding molecular gas. The latter is modeled as a bath (or thermostat) of elastic hard spheres at a given temperature. Two independent but complementary approaches are followed to reach exact expressions for the rheological properties. First, the Boltzmann equation for the so-called inelastic Maxwell models (IMM) is considered. The fact that the collision rate of IMM is independent of the relative velocity of the colliding spheres allows us to exactly compute the collisional moments of the Boltzmann operator without the knowledge of the distribution function. Thanks to this property, the transport properties of the sheared granular suspension can be exactly determined. As a second approach, a Bhatnagar–Gross–Krook (BGK)-type kinetic model adapted to granular suspensions is solved to compute the velocity moments and the velocity distribution function of the system. The theoretical results (which are given in terms of the coefficient of restitution, the reduced shear rate, the reduced background temperature, and the diameter and mass ratios) show, in general, a good agreement with the approximate analytical results derived for inelastic hard spheres (IHS) by means of Grad’s moment method and with computer simulations performed in the Brownian limiting case (m/mg→∞, where mg and m are the masses of the particles of the molecular and granular gases, respectively). In addition, as expected, the IMM and BGK results show that the temperature and non-Newtonian viscosity exhibit an S shape in a plane of stress–strain rate (discontinuous shear thickening, DST). 
The DST effect becomes more pronounced as the mass ratio m/mg increases.
Entropy doi: 10.3390/e26030264
Authors: Amira Val Baker, Mate Csanad, Nicolas Fellas, Nour Atassi, Ia Mgvdliashvili, Paul Oomen
In general, sound waves propagate radially outwards from a point source. These waves will continue in the same direction, decreasing in intensity, unless a boundary condition is met. To arrive at a universal understanding of the relation between frequency and wave propagation within spatial boundaries, we explore the maximum entropy states that are realized as resonant modes. For both circular and polygonal Chladni plates, a model is presented that successfully recreates the nodal line patterns to a first approximation. We discuss the benefits of such a model and the future work necessary to develop the model to its full predictive ability.
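A common textbook toy model for Chladni figures on a square plate (not the paper's model for circular and polygonal plates) superposes two degenerate standing-wave modes; the nodal lines are the zero set of the superposition. A minimal sketch, with mode numbers chosen arbitrarily for illustration:

```python
import math

def chladni(x, y, n, m, sign=1):
    """Toy standing-wave pattern on a unit square plate: a superposition
    of two degenerate modes (n, m) and (m, n). Nodal lines are the zero
    set of this function. This is a classical first approximation only."""
    return (math.cos(n * math.pi * x) * math.cos(m * math.pi * y)
            + sign * math.cos(m * math.pi * x) * math.cos(n * math.pi * y))

# The antisymmetric combination (sign = -1) vanishes identically on the
# diagonal x == y, so the diagonal is always one of its nodal lines:
for t in [0.1, 0.37, 0.5, 0.82]:
    assert abs(chladni(t, t, 3, 5, sign=-1)) < 1e-12

# The symmetric combination is invariant under swapping x and y:
assert abs(chladni(0.2, 0.7, 3, 5) - chladni(0.7, 0.2, 3, 5)) < 1e-12
```

Scanning this function for sign changes on a grid reproduces the familiar symmetric nodal-line patterns to a first approximation.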
Entropy doi: 10.3390/e26030263
Authors: Arash Edrisi, Hamza Patwa, Jose A. Morales Escalante
Kinetic theory provides modeling of open quantum systems subject to Markovian noise via the Wigner–Fokker–Planck equation, which is an alternative to the Lindblad master equation setting and has the advantage of great physical intuition, as it is the quantum equivalent of the classical phase-space description. We perform a numerical inspection of the Wehrl entropy for the benchmark problem of a harmonic potential, since the existence of a steady state and its analytical formula have been proven theoretically in this case. When there is friction in the noise terms, no theoretical results on the monotonicity of the absolute entropy are available. We provide numerical results of the time evolution of the entropy in the case with friction using a stochastic (Euler–Maruyama-based Monte Carlo) numerical solver. For all the chosen initial conditions studied (all of them Gaussian states), up to the inherent numerical error of the method, one cannot disregard the possibility of monotonic behavior even in the case under study, where the noise includes friction terms.
Entropy doi: 10.3390/e26030262
Authors: Siyi Xu, Wenwen Liu, Chengpei Wu, Junli Li
The No Free Lunch Theorem tells us that no algorithm can beat all other algorithms on all types of problems. Algorithm selection structures are therefore proposed to select the most suitable algorithm from a set of candidates for an unknown optimization problem. This paper introduces an innovative two-stage algorithm selection framework called CNN-HT. In the first stage, a Convolutional Neural Network (CNN) is employed to classify problems. In the second stage, the Hypothesis Testing (HT) technique is used to suggest the best-performing algorithm, based on a statistical analysis of the performance metrics of the algorithms across the problem categories. The two-stage approach can adapt to different algorithm combinations without retraining the entire model, since only the second stage needs modification, which is an improvement over one-stage approaches. To make the classification model more general, we adopt Exploratory Landscape Analysis (ELA) features of the problem as input and apply feature selection techniques to remove redundant features. In problem classification, the CNN achieves an average accuracy of 96%, demonstrating its advantage over Random Forests and Support Vector Machines. After feature selection, the accuracy increases to 98.8%, further improving classification performance while reducing computational cost. This demonstrates the effectiveness of the first stage of CNN-HT, which provides the basis for algorithm selection. In the experiments, CNN-HT also shows the advantage of its second stage, achieving better average rankings across different algorithm combinations than the individual algorithms and another algorithm combination approach.
Entropy doi: 10.3390/e26030261
Authors: Sergey Il’ich Kruglov
We study Einstein’s gravity coupled to nonlinear electrodynamics with two parameters in anti-de Sitter spacetime. Magnetically charged black holes in an extended phase space are investigated. We obtain the mass and metric functions, their asymptotics, and the corrections to the Reissner–Nordström metric function when the cosmological constant vanishes. The first law of black hole thermodynamics in an extended phase space is formulated, and the magnetic potential and the thermodynamic quantity conjugate to the coupling are obtained. We prove the generalized Smarr relation. The heat capacity and the Gibbs free energy are computed and the phase transitions are studied. It is shown that the electric fields of charged objects at the origin and the electrostatic self-energy are finite within the proposed nonlinear electrodynamics.
Entropy doi: 10.3390/e26030260
Authors: Márcio S. Gomes-Filho, Pablo de Castro, Danilo B. Liarte, Fernando A. Oliveira
The Kardar–Parisi–Zhang (KPZ) equation describes a wide range of growth-like phenomena, with applications in physics, chemistry and biology. There are three central questions in the study of KPZ growth: the determination of height probability distributions; the search for ever more precise universal growth exponents; and the apparent absence of a fluctuation–dissipation theorem (FDT) for spatial dimension d>1. Notably, these questions were answered exactly only for 1+1 dimensions. In this work, we propose a new FDT valid for the KPZ problem in d+1 dimensions. This is achieved by rearranging terms and identifying a new correlated noise which we argue to be characterized by a fractal dimension dn. We present relations between the KPZ exponents and two emergent fractal dimensions, namely df, of the rough interface, and dn. Also, we simulate KPZ growth to obtain values for transient versions of the roughness exponent α, the surface fractal dimension df and, through our relations, the noise fractal dimension dn. Our results indicate that KPZ may have at least two fractal dimensions and that, within this proposal, an FDT is restored. Finally, we provide new insights into the old question about the upper critical dimension of the KPZ universality class.
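For context, the KPZ equation referred to above takes the standard form:

```latex
% KPZ equation for a growing interface h(x, t):
\frac{\partial h}{\partial t}
  = \nu \nabla^{2} h + \frac{\lambda}{2} \left( \nabla h \right)^{2} + \eta(x, t),
% with surface tension \nu, nonlinearity \lambda, and Gaussian white noise
% \langle \eta(x,t)\,\eta(x',t') \rangle = 2D\,\delta^{d}(x - x')\,\delta(t - t').
```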
Entropy doi: 10.3390/e26030259
Authors: Alejandro J. Rojas
In this work, we consider the design of power-constrained networked control systems (NCSs) and a differential entropy-based fault-detection mechanism. For the NCS design of the control loop, we consider faults in the plant gain and unstable plant pole locations, either due to natural causes or malicious intent. Since the power-constrained approach utilized in the NCS design is a stationary approach, we then discuss the finite-time approximation of the power constraints for the relevant control loop signals. The network under study is formed by two additive white Gaussian noise (AWGN) channels located on the direct and feedback paths of the closed control loop. The finite-time approximation of the controller output signal allows us to estimate its differential entropy, which is used in our proposed fault-detection mechanism. After fault detection, we propose a fault-identification mechanism that is capable of correctly discriminating faults. Finally, we discuss the extension of the contributions developed here to future research directions, such as fault recovery and control resilience.
Entropy doi: 10.3390/e26030258
Authors: Yutaro Yamada, Fred Weiying Zhang, Yuval Kluger, Ilker Yildirim
Ensuring robustness of image classifiers against adversarial attacks and spurious correlations has been challenging. One of the most effective methods for adversarial robustness is a type of data augmentation that uses adversarial examples during training. Here, inspired by computational models of human vision, we explore a synthesis of this approach by leveraging a structured prior over image formation: the 3D geometry of objects and how it projects to images. We combine adversarial training with a weight initialization that implicitly encodes such a prior about 3D objects via 3D reconstruction pre-training. We evaluate our approach using two different datasets and compare it to alternative pre-training protocols that do not encode a prior about 3D shape. To systematically explore the effect of 3D pre-training, we introduce a novel dataset called Geon3D, which consists of simple shapes that nevertheless capture variation in multiple distinct dimensions of geometry. We find that while 3D reconstruction pre-training does not improve robustness in the simplest dataset setting we consider (Geon3D on a clean background), it improves upon adversarial training in more realistic conditions (Geon3D with textured background, and ShapeNet). We also find that 3D pre-training coupled with adversarial training improves robustness to spurious correlations between shape and background textures. Furthermore, we show that 3D-based pre-training outperforms 2D-based pre-training on ShapeNet. We hope that these results encourage further investigation of the benefits of structured, 3D-based models of vision for adversarial robustness.
Entropy doi: 10.3390/e26030257
Authors: Lucas Maquedano, Ana C. S. Costa
The effect of quantum steering describes a possible action at a distance via local measurements. In the last few years, several criteria have been proposed to detect this type of correlation in quantum systems. However, there are few approaches presented in order to measure the degree of steerability of a given system. In this work, we are interested in investigating possible ways to quantify quantum steering, where we based our analysis on different criteria presented in the literature.
Entropy doi: 10.3390/e26030256
Authors: Beatriz Arregui-García, Antonio Longa, Quintino Francesco Lotito, Sandro Meloni, Giulia Cencetti
The analysis of complex and time-evolving interactions, such as those within social dynamics, represents a current challenge in the science of complex systems. Temporal networks stand as a suitable tool for schematizing such systems, encoding all the interactions appearing between pairs of individuals in discrete time. Over the years, network science has developed many measures to analyze and compare temporal networks. Some of them imply a decomposition of the network into small pieces of interactions; i.e., only involving a few nodes for a short time range. Along this line, a possible way to decompose a network is to assume an egocentric perspective; i.e., to consider for each node the time evolution of its neighborhood. This was proposed by Longa et al. by defining the “egocentric temporal neighborhood”, which has proven to be a useful tool for characterizing temporal networks relative to social interactions. However, this definition neglects group interactions (quite common in social domains), as they are always decomposed into pairwise connections. A more general framework that also allows considering larger interactions is represented by higher-order networks. Here, we generalize the description of social interactions to hypergraphs. Consequently, we generalize their decomposition into “hyper egocentric temporal neighborhoods”. This enables the analysis of social interactions, facilitating comparisons between different datasets or nodes within a dataset, while considering the intrinsic complexity presented by higher-order interactions. Even if we limit the order of interactions to the second order (triplets of nodes), our results reveal the importance of a higher-order representation. In fact, our analyses show that second-order structures are responsible for the majority of the variability at all scales: between datasets, amongst nodes, and over time.
Entropy doi: 10.3390/e26030255
Authors: J. Gerhard Müller
It is argued that all physical knowledge ultimately stems from observation and that the simplest possible observation is that an event has happened at a certain space–time location X = (x, t). Considering historic experiments, which have been groundbreaking in the evolution of our modern ideas of matter on the atomic, nuclear, and elementary particle scales, it is shown that such experiments produce as outputs streams of macroscopically observable events which accumulate in the course of time into spatio-temporal patterns of events whose forms allow decisions to be taken concerning conceivable alternatives of explanation. Working towards elucidating the physical and informational characteristics of those elementary observations, we show that these represent hugely amplified images of the initiating micro-events and that the resulting macro-images have a cognitive value of 1 bit and a physical value of W_obs = E_obs · τ_obs ≫ h. In this latter equation, E_obs stands for the energy spent in turning the initiating micro-events into macroscopically observable events, τ_obs for the lifetimes during which the generated events remain macroscopically observable, and h for Planck’s constant. The relative value G_obs = W_obs/h finally represents a measure of the amplification that was gained in the observation process.
Entropy doi: 10.3390/e26030254
Authors: Yuli Yang, Ruiyun Chang, Xiufang Feng, Peizhen Li, Yongle Chen, Hao Zhang
The drawbacks of a one-dimensional chaotic map are its straightforward structure, abrupt intervals, and ease of signal prediction. Richer performance and a more complicated structure are required for multidimensional chaotic mapping. To address the shortcomings of current chaotic systems, an n-dimensional cosine-transform-based chaotic system (nD-CTBCS) with a chaotic coupling model is suggested in this study. To create chaotic maps of any desired dimension, nD-CTBCS can take advantage of already-existing 1D chaotic maps as seed chaotic maps. Three two-dimensional chaotic maps are provided as examples to illustrate the impact. The findings of the evaluation and experiments demonstrate that the newly created chaotic maps function better, have broader chaotic intervals, and display hyperchaotic behavior. To further demonstrate the practicability of nD-CTBCS, a reversible data hiding scheme is proposed for the secure communication of medical images. The experimental results show that the proposed method has higher security than the existing methods.
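The "seed map plus cosine transform" idea can be sketched as follows. This is an illustrative 2D coupling of two standard 1D seed maps (logistic and sine); the specific coupling form is my own assumption for demonstration and is not the paper's actual nD-CTBCS construction:

```python
import math

def logistic(x, mu=3.99):
    """1D logistic seed map on [0, 1]."""
    return mu * x * (1.0 - x)

def sine_map(x, a=0.99):
    """1D sine seed map on [0, 1]."""
    return a * math.sin(math.pi * x)

def coupled_cos_2d(x, y):
    """Illustrative 2D map: couple two 1D seed maps through a cosine
    transform so each output stays bounded in [-1, 1]. A sketch of the
    seed-map-plus-cosine-transform idea, not the paper's nD-CTBCS."""
    x_new = math.cos(math.pi * (logistic(x % 1.0) + y))
    y_new = math.cos(math.pi * (sine_map(y % 1.0) + x))
    return x_new, y_new

x, y = 0.123, 0.456
for _ in range(1000):
    x, y = coupled_cos_2d(x, y)
    # The cosine transform keeps every orbit bounded regardless of the
    # seed maps' output ranges:
    assert -1.0 <= x <= 1.0 and -1.0 <= y <= 1.0
```

The design choice the cosine transform buys, as the abstract suggests, is that arbitrary 1D seed maps can be composed into higher-dimensional maps with a guaranteed bounded output range.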
Entropy doi: 10.3390/e26030253
Authors: Lou Zhao, Yuliang Zhang, Minjie Zhang, Chunshan Liu
Millimeter-wave (mmWave) communication systems leverage the directional beamforming capabilities of antenna arrays equipped at the base stations (BS) to counteract the high propagation path loss inherent to mmWave channels. In downlink mmWave transmissions, i.e., from the BS to users, distinguishing users within the same beam direction poses a significant challenge. Additionally, digital baseband precoding techniques are limited in their ability to mitigate inter-user interference within identical beam directions, representing a fundamental constraint in mmWave downlink transmissions. This study introduces an innovative analog beamforming-based interference mitigation strategy for downlink transmissions in reconfigurable intelligent surface (RIS)-assisted hybrid analog–digital (HAD) mmWave systems. This is achieved through the joint design of analog beamformers and the corresponding coefficients at both the RIS and the BS. We first present derived closed-form approximation expressions for the achievable rate performance in the proposed scenario and establish a stringent upper bound on this performance in the regime of a large number of RIS elements. The exclusive use of analog beamforming in the downlink phase allows our proposed transmission algorithm to function efficiently when equipped with low-resolution analog-to-digital/digital-to-analog converters (A/Ds) at the BS. The energy efficiency of the downlink transmission is evaluated through the deployment of six-bit A/Ds and six-bit pulse-amplitude modulation (PAM) signals across varying numbers of activated RIS elements. Numerical simulation results validate the effectiveness of our proposed algorithms in comparison to various benchmark schemes.
Entropy doi: 10.3390/e26030252
Authors: Ravid Shwartz Ziv, Yann LeCun
Deep neural networks excel in supervised learning tasks but are constrained by the need for extensive labeled data. Self-supervised learning emerges as a promising alternative, allowing models to learn without explicit labels. Information theory has shaped deep neural networks, particularly the information bottleneck principle. This principle optimizes the trade-off between compression and preserving relevant information, providing a foundation for efficient network design in supervised contexts. However, its precise role and adaptation in self-supervised learning remain unclear. In this work, we scrutinize various self-supervised learning approaches from an information-theoretic perspective, introducing a unified framework that encapsulates the self-supervised information-theoretic learning problem. This framework includes multiple encoders and decoders, suggesting that all existing work on self-supervised learning can be seen as specific instances. We aim to unify these approaches to understand their underlying principles better and address the main challenge: many works present different frameworks with differing theories that may seem contradictory. By weaving existing research into a cohesive narrative, we delve into contemporary self-supervised methodologies, spotlight potential research areas, and highlight inherent challenges. Moreover, we discuss how to estimate information-theoretic quantities and their associated empirical problems. Overall, this paper provides a comprehensive review of the intersection of information theory, self-supervised learning, and deep neural networks, aiming for a better understanding through our proposed unified approach.
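The information bottleneck principle mentioned above is usually stated as the following optimization over representations Z of an input X with target Y (standard formulation):

```latex
% Information bottleneck: learn Z that is maximally compressed with
% respect to X while staying informative about Y, by minimizing
\mathcal{L}_{\mathrm{IB}} = I(X; Z) - \beta \, I(Z; Y),
% where \beta > 0 trades off compression I(X;Z) against relevance I(Z;Y).
```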
Entropy doi: 10.3390/e26030251
Authors: Lucas Alonso, Guilherme C. Matos, François Impens, Paulo A. Maia Neto, Reinaldo de Melo e Souza
A mirror subjected to a fast mechanical oscillation emits photons out of the quantum vacuum—a phenomenon known as the dynamical Casimir effect (DCE). The mirror is usually treated as an infinite metallic surface. Here, we show that, in realistic experimental conditions (mirror size and oscillation frequency), this assumption is inadequate and drastically overestimates the DCE radiation. Taking the opposite limit, we use instead the dipolar approximation to obtain a simpler and more realistic treatment of DCE for macroscopic bodies. Our approach is inspired by a microscopic theory of DCE, which is extended to the macroscopic realm by a suitable effective Hamiltonian description of moving anisotropic scatterers. We illustrate the benefits of our approach by considering the DCE from macroscopic bodies of different geometries.
Entropy doi: 10.3390/e26030250
Authors: Wuqu Wang, Zhe Tao, Nan Liu, Wei Kang
D2D coded caching, originally introduced by Ji, Caire, and Molisch, significantly improves communication efficiency by applying the multi-cast technology proposed by Maddah-Ali and Niesen to the D2D network. Most prior works on D2D coded caching are based on the assumption that all users will request content at the beginning of the delivery phase. However, in practice, this is often not the case. Motivated by this consideration, this paper formulates a new problem called request-robust D2D coded caching. The considered problem includes K users and a content server with access to N files. Only r users, known as requesters, request a file each at the beginning of the delivery phase. The objective is to minimize the average and worst-case delivery rate, i.e., the average and worst-case number of broadcast bits from all users among all possible demands. For this novel D2D coded caching problem, we propose a scheme based on uncoded cache placement and exploiting common demands and one-shot delivery. We also propose information-theoretic converse results under the assumption of uncoded cache placement. Furthermore, we adapt the scheme proposed by Yapar et al. for uncoded cache placement and one-shot delivery to the request-robust D2D coded caching problem and prove that the performance of the adapted scheme is order optimal within a factor of two under uncoded cache placement and within a factor of four in general. Finally, through numerical evaluations, we show that the proposed scheme outperforms known D2D coded caching schemes applied to the request-robust scenario for most cache size ranges.
Entropy doi: 10.3390/e26030249
Authors: Abhisek Chakraborty, Anirban Bhattacharya, Debdeep Pati
We commonly encounter the problem of identifying an optimally weight-adjusted version of the empirical distribution of observed data, adhering to predefined constraints on the weights. Such constraints often manifest as restrictions on the moments, tail behavior, shapes, number of modes, etc., of the resulting weight-adjusted empirical distribution. In this article, we substantially enhance the flexibility of such a methodology by introducing a nonparametrically imbued distributional constraint on the weights and developing a general framework leveraging the maximum entropy principle and tools from optimal transport. The key idea is to ensure that the maximum entropy weight-adjusted empirical distribution of the observed data is close to a pre-specified probability distribution in terms of the optimal transport metric, while allowing for subtle departures. The proposed scheme for the re-weighting of observations subject to constraints is reminiscent of the empirical likelihood and related ideas, but offers greater flexibility in applications where parametric distribution-guided constraints arise naturally. The versatility of the proposed framework is demonstrated in the context of three disparate applications where data re-weighting is warranted to satisfy side constraints on the optimization problem at the heart of the statistical task—namely, portfolio allocation, semi-parametric inference for complex surveys, and ensuring algorithmic fairness in machine learning algorithms.
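The basic moment-constrained special case of such maximum-entropy re-weighting has a classical closed form: exponentially tilted weights. A minimal pure-Python sketch (the tilting parameter is found by bisection; the paper's optimal-transport-constrained scheme generalizes this plain setting):

```python
import math

def maxent_weights(x, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy re-weighting of observations x so the weighted
    mean equals target_mean. The optimality conditions give exponentially
    tilted weights w_i proportional to exp(lam * x_i); lam is found by
    bisection, since the tilted mean is increasing in lam.
    (Moment-constrained case only, for illustration.)"""
    def weights(lam):
        w = [math.exp(lam * xi) for xi in x]
        s = sum(w)
        return [wi / s for wi in w]

    def wmean(lam):
        return sum(wi * xi for wi, xi in zip(weights(lam), x))

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if wmean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    return weights(0.5 * (lo + hi))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
w = maxent_weights(x, target_mean=3.5)
assert abs(sum(w) - 1.0) < 1e-9                                  # valid weights
assert abs(sum(wi * xi for wi, xi in zip(w, x)) - 3.5) < 1e-6    # constraint met
```

The empirical-likelihood flavor the abstract alludes to is visible here: the observed points are kept fixed and only their weights are adjusted to satisfy the side constraint.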
Entropy doi: 10.3390/e26030248
Authors: Peng Peng, Tianlong Fan, Linyuan Lü
Diverse higher-order structures, foundational for supporting a network’s “meta-functions”, play a vital role in structure, functionality, and the emergence of complex dynamics. Nevertheless, the problem of dismantling them has been consistently overlooked. In this paper, we introduce the concept of dismantling higher-order structures, with the objective of disrupting not only network connectivity but also eradicating all higher-order structures in each branch, thereby ensuring thorough functional paralysis. Given the diversity and unknown specifics of higher-order structures, identifying and targeting them individually is not practical or even feasible. Fortunately, their close association with k-cores arises from their internal high connectivity. Thus, we transform higher-order structure measurement into measurements on k-cores with corresponding orders. Furthermore, we propose the Belief Propagation-guided Higher-order Dismantling (BPHD) algorithm, minimizing dismantling costs while achieving maximal disruption to connectivity and higher-order structures, ultimately converting the network into a forest. BPHD exhibits the explosive vulnerability of network higher-order structures, counterintuitively showcasing decreasing dismantling costs with increasing structural complexity. Our findings offer a novel approach for dismantling malignant networks, emphasizing the substantial challenges inherent in safeguarding against such malicious attacks.
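Since the method measures higher-order structures through k-cores, a minimal pure-Python k-core routine may help fix ideas. This is the standard peeling algorithm, shown for illustration only; it is not the BPHD algorithm itself:

```python
from collections import defaultdict

def k_core(edges, k):
    """Return the node set of the k-core: the maximal subgraph in which
    every node has degree >= k, found by iteratively peeling nodes of
    degree < k."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for node in [n for n in adj if len(adj[n]) < k]:
            for nb in adj.pop(node):
                if nb in adj:
                    adj[nb].discard(node)
            changed = True
    return set(adj)

# A triangle (3-clique) plus a pendant node: the 2-core is the triangle.
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
assert k_core(edges, 2) == {1, 2, 3}
assert k_core(edges, 3) == set()   # no node retains 3 neighbours
```

Dismantling in the paper's sense then amounts to choosing removals that leave no k-core of the targeted order in any remaining branch.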
Entropy doi: 10.3390/e26030247
Authors: Wei Zhou, Jin Chen, Bingqing Ding
In the original publication [...]
Entropy doi: 10.3390/e26030246
Authors: Gilberto M. Kremer
Self-gravitating fluid instabilities are analysed within the framework of a post-Newtonian Boltzmann equation coupled with the Poisson equations for the gravitational potentials of the post-Newtonian theory. The Poisson equations are determined from the knowledge of the energy–momentum tensor calculated from a post-Newtonian Maxwell–Jüttner distribution function. The one-particle distribution function and the gravitational potentials are perturbed from their background states, and the perturbations are represented by plane waves characterised by a wave number vector and time-dependent small amplitudes. The time-dependent amplitude of the one-particle distribution function is supposed to be a linear combination of the summational invariants of the post-Newtonian kinetic theory. From the coupled system of differential equations for the time-dependent amplitudes of the one-particle distribution function and gravitational potentials, an evolution equation for the mass density contrast is obtained. It is shown that for perturbation wavelengths smaller than the Jeans wavelength, the mass density contrast propagates as harmonic waves in time. For perturbation wavelengths greater than the Jeans wavelength, the mass density contrast grows in time, and the instability growth in the post-Newtonian theory is more accentuated than the one of the Newtonian theory.
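For reference, the classical (Newtonian) Jeans criterion against which the post-Newtonian result is compared reads, in its standard form:

```latex
% Newtonian Jeans dispersion relation for a self-gravitating gas with
% sound speed c_s and background density \rho_0; perturbations
% \propto \exp[i(kx - \omega t)] obey
\omega^{2} = c_s^{2} k^{2} - 4\pi G \rho_{0},
% so the mass density contrast oscillates for \lambda < \lambda_J and
% grows for \lambda > \lambda_J, where
\lambda_{J} = c_s \sqrt{\frac{\pi}{G \rho_{0}}}.
```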
Entropy doi: 10.3390/e26030244
Authors: Sajani Vithana, Sennur Ulukus
We introduce the problem of deceptive information retrieval (DIR), in which a user wishes to download a required file out of multiple independent files stored in a system of databases while deceiving the databases by making the databases’ predictions on the user-required file index incorrect with high probability. Conceptually, DIR is an extension of private information retrieval (PIR). In PIR, a user downloads a required file without revealing its index to any of the databases. The metric of deception is defined as the probability of error of the databases’ prediction on the user-required file, minus the corresponding probability of error in PIR. The problem is defined on time-sensitive data that keep updating from time to time. In the proposed scheme, the user deceives the databases by sending real queries to download the required file at the time of the requirement and dummy queries at multiple distinct future time instances, thereby manipulating the probabilities of sending each query for each file requirement, using which the databases make their predictions on the user-required file index. The proposed DIR scheme is based on a capacity-achieving probabilistic PIR scheme, and achieves rates lower than the PIR capacity due to the additional downloads made to deceive the databases. When the required level of deception is zero, the proposed scheme achieves the PIR capacity.
]]>Entropy doi: 10.3390/e26030245
Authors: Sivanandam Sivasankaran Marimuthu Bhuvaneswari Abdullah K. Alzahrani
In this study, numerical simulations are conducted with the goal of exploring the impact of the direction of the moving wall, solute and thermal transport, and entropy production on doubly diffusive convection in a chamber occupied by a Casson liquid. Wall movement has a significant impact on convective flow, which, in turn, affects the rate of mass and heat transfer; this sparked our interest in conducting further analysis. The left and right (upright) walls are preserved with constant (but different) thermal and solutal distributions, while the horizontal boundaries are impermeable to mass transfer and insulated from heat transfer. Numerical solutions are acquired using the control volume technique. Outcomes under a variety of Casson fluid parameters, including Ri, Gr, buoyancy ratio, and direction of the moving wall(s), are explored, and the influences of entropy generation are comprehensively investigated. While the flow field consists of a single cell in case I, it is dual-cellular in case III for all values of the considered parameters. Comparing the three cases, the average heat and mass transport presented lower values in case III due to the movement of an isothermal (left) wall against the buoyant force, while these values are enhanced in case I. The obtained results are expected to be useful in thermal engineering, material, food, and chemical processing applications.
]]>Entropy doi: 10.3390/e26030243
Authors: Jianhua Cheng Zili Wang Bing Qi He Wang
Combined SINS/GPS navigation systems have been widely used. However, when a traditional combined SINS/GPS navigation system travels between tall buildings, under tree cover, or through tunnels, the GPS signal is frequently blocked. During such interruptions, the combined SINS/GPS navigation method degenerates into a purely inertial system, and navigation errors accumulate. In this paper, an adaptive Kalman filtering algorithm based on polynomial fitting and a Taylor expansion is proposed. Using the navigation information output by the inertial system, polynomial interpolation is used to construct the velocity and position equations of the carrier, and a Taylor expansion is then used to construct a virtual measurement at the moment of the GPS signal interruption, compensating for the lack of measurement information in the combined SINS/GPS navigation system while the GPS signal is interrupted. The results of computer simulation experiments and road measurement tests based on a loosely combined SINS/GPS navigation system show that, when the carrier faces a GPS signal interruption, the proposed algorithm achieves higher accuracy in attitude angle estimation, velocity estimation, and position localization than a combined SINS/GPS navigation algorithm that takes no remedial measures, and the system possesses higher stability.
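The outage-bridging idea summarized in this abstract can be sketched as a least-squares polynomial fit to recent inertial outputs, extrapolated to the interruption instant. This is an illustrative reconstruction, not the authors' filter; the function name, sample times, and polynomial degree are assumptions:

```python
import numpy as np

# Hypothetical sketch of the virtual-measurement step: fit a polynomial to the
# last few inertial-navigation outputs, then evaluate it at the outage instant
# to stand in for the missing GPS measurement.
def virtual_measurement(times, values, t_outage, degree=2):
    """Extrapolate a virtual measurement at t_outage from recent samples."""
    coeffs = np.polyfit(times, values, degree)  # least-squares polynomial fit
    return np.polyval(coeffs, t_outage)         # evaluate at the outage time

# Noiseless quadratic motion is recovered exactly by a degree-2 fit.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
p = 2.0 + 3.0 * t + 0.5 * t**2                  # position samples
print(virtual_measurement(t, p, 5.0))           # ≈ 29.5
```

In the paper's setting, such an extrapolated value would feed the Kalman filter's measurement update in place of the interrupted GPS output; the degree and window length here are illustrative defaults.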
]]>Entropy doi: 10.3390/e26030242
Authors: Zhuoen Lu Xun He Dewei Yi Jie Sui
Using electroencephalogram (EEG), we tested the hypothesis that the association of a neutral stimulus with the self would elicit ultra-fast neural responses from early top-down feedback modulation to late feedforward periods for cognitive processing, resulting in self-prioritization in information processing. In two experiments, participants first learned three associations between personal labels (self, friend, stranger) and geometric shapes (Experiment 1) and three colors (Experiment 2), and then they judged whether the shape/color–label pairings matched. Stimuli in Experiment 2 were shown in a social communicative setting with two avatars facing each other, one aligned with the participant’s view (first-person perspective) and the other with a third-person perspective. The color was present on the t-shirt of one avatar. This setup allowed for an examination of how social contexts (i.e., perspective taking) affect neural connectivity mediating self-related processing. Functional connectivity analyses in the alpha band (8–12 Hz) revealed that self–other discrimination was mediated by two distinct phases of neural couplings between frontal and occipital regions, involving an early phase of top-down feedback modulation from frontal to occipital areas followed by a later phase of feedforward signaling from occipital to frontal regions. Moreover, while social communicative settings influenced the later feedforward connectivity phase, they did not alter the early feedback coupling. The results indicate that regardless of stimulus type and social context, the early phase of neural connectivity represents an enhanced state of awareness towards self-related stimuli, whereas the later phase of neural connectivity may be associated with cognitive processing of socially meaningful stimuli.
]]>Entropy doi: 10.3390/e26030241
Authors: Hiroshi Frusawa
On approaching the dynamical transition temperature, supercooled liquids show heterogeneity over space and time. Static replica theory investigates the dynamical crossover in terms of the free energy landscape (FEL). Two kinds of static approaches have provided a self-consistent equation for determining this crossover, similar to the mode coupling theory for glassy dynamics. One uses the Morita–Hiroike formalism of the liquid state theory, whereas the other relies on the density functional theory (DFT). Each of the two approaches has advantages in terms of perturbative field theory. Here, we develop a replica field theory that has the benefits from both formulations. We introduce the generalized Franz–Parisi potential to formulate a correlation functional. Considering fluctuations around an inhomogeneous density determined by the Ramakrishnan–Yussouf DFT, we find a new closure as the stability condition of the correlation functional. The closure leads to the self-consistent equation involving the triplet direct correlation function. The present field theory further helps us study the FEL beyond the mean-field approximation.
]]>Entropy doi: 10.3390/e26030240
Authors: Urszula Bentkowska Wojciech Gałka Marcin Mrukowicz Aleksander Wojtowicz
The purpose of the study is to propose a multi-class ensemble classifier using interval modeling dedicated to microarray datasets. An approach of creating the uncertainty intervals for the single prediction values of constituent classifiers and then aggregating the obtained intervals with the use of interval-valued aggregation functions is used. The proposed heterogeneous classification employs Random Forest, Support Vector Machines, and Multilayer Perceptron as component classifiers, utilizing cross-entropy to select the optimal classifier. Moreover, orders for intervals are applied to determine the decision class of an object. The applied interval-valued aggregation functions are tested in terms of optimizing the performance of the considered ensemble classifier. The proposed model’s quality, superior to other well-known and component classifiers, is validated through comparison, demonstrating the efficacy of cross-entropy in ensemble model construction.
]]>Entropy doi: 10.3390/e26030239
Authors: Zunguan Fan Yifan Feng Kang Wang Xiaoli Li
Efficient flotation beneficiation heavily relies on accurate flotation condition recognition based on monitored froth video. However, the recognition accuracy is hindered by limitations of extracting temporal features from froth videos and establishing correlations between complex multi-modal high-order data. To address the difficulties of inadequate temporal feature extraction, inaccurate online condition detection, and inefficient flotation process operation, this paper proposes a novel flotation condition recognition method named the multi-modal temporal hypergraph neural network (MTHGNN) to extract and fuse multi-modal temporal features. To extract abundant dynamic texture features from froth images, the MTHGNN employs an enhanced version of the local binary pattern algorithm from three orthogonal planes (LBP-TOP) and incorporates additional features from the three-dimensional space as supplements. Furthermore, a novel multi-view temporal feature aggregation network (MVResNet) is introduced to extract temporal aggregation features from the froth image sequence. By constructing a temporal multi-modal hypergraph neural network, we encode complex high-order temporal features, establish robust associations between data structures, and flexibly model the features of froth image sequence, thus enabling accurate flotation condition identification through the fusion of multi-modal temporal features. The experimental results validate the effectiveness of the proposed method for flotation condition recognition, providing a foundation for optimizing flotation operations.
]]>Entropy doi: 10.3390/e26030238
Authors: Steven Gimbel
Ludwig Boltzmann’s move in his seminal paper of 1877, introducing a statistical understanding of entropy, was a watershed moment in the history of physics. The work not only introduced quantization and provided a new understanding of entropy, it also challenged the understanding of what a law of nature could be. Traditionally, nomological necessity, that is, specifying the way in which a system must develop, was considered an essential element of proposed physical laws. Yet, here was a new understanding of the Second Law of Thermodynamics that no longer possessed this property. While it was a new direction in physics, in other important scientific discourses of that time, specifically Huttonian geology and Darwinian evolution, similar approaches were taken in which a system’s development followed principles, but did so in a way that both provided a direction of time and allowed for non-deterministic, though rule-based, time evolution. Boltzmann referred to both of these theories, especially the work of Darwin, frequently. The possibility that Darwin influenced Boltzmann’s thought in physics can be seen as being supported by Boltzmann’s later writings.
]]>Entropy doi: 10.3390/e26030237
Authors: Jude A. Osara Michael D. Bryant
Entropy generation, formulated by combining the first and second laws of thermodynamics with an appropriate thermodynamic potential, emerges as the difference between a phenomenological entropy function and a reversible entropy function. The phenomenological entropy function is evaluated over an irreversible path through thermodynamic state space via real-time measurements of thermodynamic states. The reversible entropy function is calculated along an ideal reversible path through the same state space. Entropy generation models for various classes of systems—thermal, externally loaded, internally reactive, open and closed—are developed via selection of suitable thermodynamic potentials. Here, we simplify thermodynamic principles to specify convenient and consistently accurate system governing equations and characterization models. The formulations introduce a new and universal Phenomenological Entropy Generation (PEG) theorem. The systems and methods presented—and demonstrated on frictional wear, grease degradation, battery charging and discharging, metal fatigue, and pump flow—can be used for design, analysis, and support of diagnostic monitoring and optimization.
]]>Entropy doi: 10.3390/e26030236
Authors: Zhan Chen Wenxing Fu Ruitao Zhang Yangwang Fang Zhun Xiao
The problem of state estimation based on bearing-only sensors is increasingly important, while existing research on distributed filtering solutions remains rather limited. Therefore, this paper proposes a novel distributed cubature information filtering (DCIF) method to address the state estimation challenge in bearing-only sensor networks. Firstly, the system model of the bearing-only sensor network is constructed, and the observability of the system is analyzed. The sensor nodes are paired to measure relative angle information. Subsequently, coordinated consistency theory is employed to achieve a unified state estimation of the maneuvering target. The DCIF method enhances the observability of the system, addressing the large accuracy errors and divergence of traditional nonlinear filtering algorithms. Building upon the theoretical proof of consistency convergence in DCIF, four simulation experiments were conducted for comparison. These experiments validate the effectiveness and superiority of the DCIF method in bearing-only sensor networks.
]]>Entropy doi: 10.3390/e26030235
Authors: Prasoon Kumar Vinodkumar Dogus Karabulut Egils Avots Cagri Ozcinar Gholamreza Anbarjafari
The research groups in computer vision, graphics, and machine learning have dedicated a substantial amount of attention to the areas of 3D object reconstruction, augmentation, and registration. Deep learning is the predominant method used in artificial intelligence for addressing computer vision challenges. However, deep learning on three-dimensional data presents distinct obstacles and is now in its nascent phase. There have been significant advancements in deep learning specifically for three-dimensional data, offering a range of ways to address these issues. This study offers a comprehensive examination of the latest advancements in deep learning methodologies. We examine many benchmark models for the tasks of 3D object registration, augmentation, and reconstruction. We thoroughly analyse their architectures, advantages, and constraints. In summary, this report provides a comprehensive overview of recent advancements in three-dimensional deep learning and highlights unresolved research areas that will need to be addressed in the future.
]]>Entropy doi: 10.3390/e26030234
Authors: Nicholas J. Russell Kevin R. Pilkiewicz Michael L. Mayo
Studies of collective motion have heretofore been dominated by a thermodynamic perspective in which the emergent “flocked” phases are analyzed in terms of their time-averaged orientational and spatial properties. Studies that attempt to scrutinize the dynamical processes that spontaneously drive the formation of these flocks from initially random configurations are far more rare, perhaps owing to the fact that said processes occur far from the eventual long-time steady state of the system and thus lie outside the scope of traditional statistical mechanics. For systems whose dynamics are simulated numerically, the nonstationary distribution of system configurations can be sampled at different time points, and the time evolution of the average structural properties of the system can be quantified. In this paper, we employ this strategy to characterize the spatial dynamics of the standard Vicsek flocking model using two correlation functions common to condensed matter physics. We demonstrate, for modest system sizes with 800 to 2000 agents, that the self-assembly dynamics can be characterized by three distinct and disparate time scales that we associate with the corresponding physical processes of clustering (compaction), relaxing (expansion), and mixing (rearrangement). We further show that the behavior of these correlation functions can be used to reliably distinguish between phenomenologically similar models with different underlying interactions and, in some cases, even provide a direct measurement of key model parameters.
]]>Entropy doi: 10.3390/e26030233
Authors: Milan Lopuhaä-Zwakenberg Jasper Goseling
We consider privacy mechanisms for releasing data X=(S,U), where S is sensitive and U is non-sensitive. We introduce the robust local differential privacy (RLDP) framework, which provides strong privacy guarantees while preserving utility. This is achieved by providing robust privacy: our mechanisms provide privacy not only with respect to a publicly available estimate of the unknown true distribution, but also with respect to similar distributions. Such robustness mitigates the potential privacy leaks that might arise from the difference between the true distribution and the estimated one. At the same time, we mitigate the utility penalties that come with ordinary differential privacy, which involves making worst-case assumptions and dealing with extreme cases. We achieve robustness in privacy by constructing an uncertainty set based on a Rényi divergence. By analyzing the structure of this set and approximating it with a polytope, we can use robust optimization to find mechanisms with high utility. However, this relies on vertex enumeration and becomes computationally infeasible for large input spaces. Therefore, we also introduce two low-complexity algorithms that build on existing LDP mechanisms. We evaluate the utility and robustness of the mechanisms using numerical experiments and demonstrate that our mechanisms provide robust privacy while achieving a utility that is close to optimal.
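As a small grounded illustration (the function name and the discrete finite-support setting are assumptions of mine, not the paper's), the Rényi divergence used to define the uncertainty set can be computed for discrete distributions as follows:

```python
import numpy as np

# Rényi divergence D_alpha(P||Q) = log( sum_i p_i^alpha * q_i^(1-alpha) ) / (alpha - 1)
# for discrete distributions with full support; alpha -> 1 recovers KL divergence.
def renyi_divergence(p, q, alpha):
    p, q = np.asarray(p, float), np.asarray(q, float)
    if alpha == 1.0:
        return float(np.sum(p * np.log(p / q)))  # Kullback–Leibler limit case
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

print(renyi_divergence([0.5, 0.5], [0.5, 0.5], alpha=2.0))      # 0.0
print(renyi_divergence([0.6, 0.4], [0.5, 0.5], alpha=2.0) > 0)  # True
```

An uncertainty set of the kind described would then collect all distributions within a chosen Rényi-divergence radius of the public estimate.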
]]>Entropy doi: 10.3390/e26030232
Authors: Yuan Yu
For a family of stochastic differential equations driven by additive Gaussian noise, we study the asymptotic behavior of the corresponding Euler–Maruyama scheme by deriving its convergence rate in terms of relative entropy. Our results on the convergence rate in terms of relative entropy complement the conventional ones in the strong and weak sense and induce some other properties of the Euler–Maruyama scheme. For example, convergence in terms of the total variation distance follows directly via Pinsker’s inequality. Moreover, when the drift is β-Hölder continuous (0&lt;β&lt;1) in the spatial variable, the convergence rate in terms of the weighted variation distance is also established. Neither of these convergence results appears to follow directly from other known convergence results for the Euler–Maruyama scheme. The main tool this paper relies on is the Girsanov transform.
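For readers unfamiliar with the scheme under study, a minimal Euler–Maruyama sketch for an SDE with additive noise, dX = b(X) dt + σ dW, looks as follows; the drift, noise level, and step count are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Euler–Maruyama discretization of dX = b(X) dt + sigma dW with additive noise:
# X_{k+1} = X_k + b(X_k) * dt + sigma * sqrt(dt) * N(0, 1).
def euler_maruyama(b, sigma, x0, T, n_steps, rng):
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = x[k] + b(x[k]) * dt + sigma * rng.normal(0.0, np.sqrt(dt))
    return x

# Ornstein–Uhlenbeck drift b(x) = -x as a simple additive-noise test case.
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: -x, 0.5, 1.0, 1.0, 1000, rng)
print(path.shape)  # (1001,)
```

Pinsker's inequality, TV(P, Q) ≤ sqrt(KL(P‖Q)/2), is what converts the relative-entropy rate for this scheme into a total-variation rate.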
]]>Entropy doi: 10.3390/e26030231
Authors: Andrés Arango-Restrepo J. Miguel Rubi
Symmetry breaking is a phenomenon that is observed in various contexts, from the early universe to complex organisms, and it is considered a key puzzle in understanding the emergence of life. The importance of this phenomenon is underscored by the prevalence of enantiomeric amino acids and proteins. However, the origin of symmetry breaking has yet to be comprehensively explained, particularly from an energetic standpoint. This article explores a novel approach by considering energy dissipation, specifically lost free energy, as a crucial factor in elucidating symmetry breaking. By conducting a comprehensive thermodynamic analysis applicable across scales, ranging from elementary particles to aggregated structures such as crystals, we present experimental evidence establishing a direct link between nonequilibrium free energy and energy dissipation during the formation of the structures. The results emphasize the pivotal role of energy dissipation, not only as an outcome but as the trigger for symmetry breaking. This insight suggests that understanding the origins of complex systems, from cells to living beings and the universe itself, requires a lens focused on nonequilibrium processes.
]]>Entropy doi: 10.3390/e26030229
Authors: Guanling Li Wenlei Zhao
We investigate, both theoretically and numerically, the dynamics of out-of-time-ordered correlators (OTOCs) under quantum resonance conditions for a kicked rotor model. We employ various operators to construct OTOCs in order to thoroughly quantify their commutation relation at different times, thereby unveiling the process of quantum scrambling. With the help of the quantum resonance condition, we deduce exact expressions for the quantum states during both forward evolution and time reversal, which enables us to establish the laws governing the time dependence of the OTOCs. Interestingly, we find that OTOCs of different types increase as a quadratic function of time, breaking the freezing of quantum scrambling induced by dynamical localization under non-resonance conditions. The underlying mechanism is uncovered, and possible applications in quantum entanglement are discussed.
]]>Entropy doi: 10.3390/e26030230
Authors: Sabre Kais
Phase transitions happen at critical values of the controlling parameters, such as the critical temperature in classical phase transitions and critical system parameters in the quantum case. However, true criticality happens only in the thermodynamic limit, when the number of particles goes to infinity at constant density. To calculate the critical parameters, a finite-size scaling approach was developed to extrapolate information from a finite system to the thermodynamic limit. With the advancement of experimental and theoretical work on ultra-cold systems, particularly the trapping and control of single atomic and molecular systems, one can ask: do finite systems exhibit quantum phase transitions? To address this question, finite-size scaling for finite systems was developed to calculate the quantum critical parameters. The recent observation of a quantum phase transition in a single trapped 171Yb+ ion indicates the possibility of quantum phase transitions in finite systems. This perspective focuses on examining chemical processes at ultra-cold temperatures, as quantum phase transitions—particularly the formation and dissociation of chemical bonds—are basic processes for understanding the whole of chemistry.
]]>Entropy doi: 10.3390/e26030228
Authors: Sisi Ma Roshan Tourani
The knowledge of the causal mechanisms underlying one single system may not be sufficient to answer certain questions. One can gain additional insights from comparing and contrasting the causal mechanisms underlying multiple systems and uncovering consistent and distinct causal relationships. For example, discovering common molecular mechanisms among different diseases can lead to drug repurposing. The problem of comparing causal mechanisms among multiple systems is non-trivial, since the causal mechanisms are usually unknown and need to be estimated from data. If we estimate the causal mechanisms from data generated from different systems and directly compare them (the naive method), the result can be sub-optimal. This is especially true if the data generated by the different systems differ substantially with respect to their sample sizes. In this case, the quality of the estimated causal mechanisms for the different systems will differ, which can in turn affect the accuracy of the estimated similarities and differences among the systems via the naive method. To mitigate this problem, we introduced the bootstrap estimation and the equal sample size resampling estimation method for estimating the difference between causal networks. Both of these methods use resampling to assess the confidence of the estimation. We compared these methods with the naive method in a set of systematically simulated experimental conditions with a variety of network structures and sample sizes, and using different performance metrics. We also evaluated these methods on various real-world biomedical datasets covering a wide range of data designs.
]]>Entropy doi: 10.3390/e26030227
Authors: Shixiang Han Guanghui Yan Huayan Pei Wenwen Chang
In order to investigate the impact of two immunization strategies—vaccination targeting susceptible individuals to reduce their infection rate and clinical medical interventions targeting infected individuals to enhance their recovery rate—on the spread of infectious diseases in complex networks, this study proposes a bilinear SIR infectious disease model that considers bidirectional immunization. By analyzing the conditions for the existence of endemic equilibrium points, we derive the basic reproduction numbers and outbreak thresholds for both homogeneous and heterogeneous networks. The epidemic model is then reconstructed and extensively analyzed using continuous-time Markov chain (CTMC) methods. This analysis includes the investigation of transition probabilities, transition rate matrices, steady-state distributions, and the transition probability matrix based on the embedded chain. In numerical simulations, a notable concordance exists between the outcomes of CTMC and mean-field (MF) simulations, thereby substantiating the efficacy of the CTMC model. Moreover, the CTMC-based model adeptly captures the inherent stochastic fluctuation in the disease transmission, which is consistent with the mathematical properties of Markov chains. We further analyze the relationship between the system’s steady-state infection density and the immunization rate through MCS. The results suggest that the infection density decreases with an increase in the immunization rate among susceptible individuals. The current research results will enhance our understanding of infectious disease transmission patterns in real-world scenarios, providing valuable theoretical insights for the development of epidemic prevention and control strategies.
]]>Entropy doi: 10.3390/e26030226
Authors: Lin Yang
We propose a two-sample testing procedure for high-dimensional time series. To obtain the asymptotic distribution of our ℓ∞-type test statistic under the null hypothesis, we establish high-dimensional central limit theorems (HCLTs) for α-mixing sequences. Specifically, we derive two HCLTs for the maximum of a sum of high-dimensional α-mixing random vectors under the assumptions of bounded finite moments and exponential tails, respectively. The proposed HCLT for α-mixing sequences under the bounded finite moments assumption is novel, and, in comparison with existing results, we improve the convergence rate of the HCLT under the exponential tails assumption. To compute the critical value, we employ the blockwise bootstrap method. Importantly, our approach does not require the independence of the two samples, making it applicable to detecting change points in high-dimensional time series. Numerical results emphasize the effectiveness and advantages of our method.
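A hedged sketch of how a blockwise bootstrap critical value for an ℓ∞-type statistic might be computed; the block length, resampling details, and names are my assumptions, not the paper's exact procedure:

```python
import numpy as np

# Blockwise bootstrap: resample contiguous blocks (preserving serial dependence)
# and take the (1 - alpha) quantile of the resampled max-type statistics.
def block_bootstrap_critical_value(x, block_len, n_boot, alpha, rng):
    """x: (n, p) array of centered observations; returns the (1 - alpha)
    quantile of max_j |n^{-1/2} sum_i x*_{ij}| over bootstrap resamples."""
    n, _ = x.shape
    n_blocks = n // block_len
    stats = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        xb = np.concatenate([x[s:s + block_len] for s in starts])
        stats[b] = np.abs(xb.sum(axis=0)).max() / np.sqrt(len(xb))
    return np.quantile(stats, 1.0 - alpha)

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 5))
crit = block_bootstrap_critical_value(x - x.mean(axis=0), 10, 200, 0.05, rng)
print(crit > 0)  # True
```

Resampling whole blocks rather than single observations is what lets the bootstrap distribution mimic the α-mixing dependence of the original series.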
]]>Entropy doi: 10.3390/e26030225
Authors: Fabio Anza James P. Crutchfield
Any given density matrix can be represented as an infinite number of ensembles of pure states. This leads to the natural question of how to uniquely select one out of the many, apparently equally-suitable, possibilities. Following Jaynes’ information-theoretic perspective, this can be framed as an inference problem. We propose the Maximum Geometric Quantum Entropy Principle to exploit the notions of Quantum Information Dimension and Geometric Quantum Entropy. These allow us to quantify the entropy of fully arbitrary ensembles and select the one that maximizes it. After formulating the principle mathematically, we give the analytical solution to the maximization problem in a number of cases and discuss the physical mechanism behind the emergence of such maximum entropy ensembles.
]]>Entropy doi: 10.3390/e26030224
Authors: Kani Song Linlin Chen Hengyou Wang
Image captioning is important for improving the intelligence of construction projects and assisting managers in mastering construction site activities. However, there are few image-captioning models for construction scenes at present, and the existing methods do not perform well in complex construction scenes. According to the characteristics of construction scenes, we label a text description dataset based on the MOCS dataset and propose a style-enhanced Transformer for image captioning in construction scenes, simply called SETCAP. Specifically, we extract the grid features using the Swin Transformer. Then, to enhance the style information, we not only use the grid features as the initial detail semantic features but also extract style information by style encoder. In addition, in the decoder, we integrate the style information into the text features. The interaction between the image semantic information and the text features is carried out to generate content-appropriate sentences word by word. Finally, we add the sentence style loss into the total loss function to make the style of generated sentences closer to the training set. The experimental results show that the proposed method achieves encouraging results on both the MSCOCO and the MOCS datasets. In particular, SETCAP outperforms state-of-the-art methods by 4.2% CIDEr scores on the MOCS dataset and 3.9% CIDEr scores on the MSCOCO dataset, respectively.
]]>Entropy doi: 10.3390/e26030223
Authors: Philippe Jacquet
In this paper, we analyse the genome sequence of COVID-19 from an information-theoretic point of view and compare it with past and present genomes. We use the powerful tool of joint complexity to quantify the similarities between the various potential parent genomes. The tool has a computational complexity several orders of magnitude below that of the classic Smith–Waterman algorithm, which allows it to be used on a larger scale.
]]>Entropy doi: 10.3390/e26030222
Authors: Yulin Mao Jianghui Xin Liguo Zang Jing Jiao Cheng Xue
Aiming at the difficulty of extracting fault characteristics and the low accuracy of fault diagnosis over the full life cycle of rolling bearings, a fault diagnosis method for rolling bearings based on the grey relation degree is proposed in this paper. Firstly, the subtraction-average-based optimizer is used to optimize the parameters of the variational mode decomposition algorithm. Secondly, the vibration signals of the bearings are decomposed using the optimized parameters, and the feature vector of the intrinsic mode function component corresponding to the minimum envelope entropy is extracted. Finally, the grey proximity and grey similarity relation degrees based on the standard distance entropy are weighted to calculate the grey comprehensive relation degree between the feature vector of the vibration signals and each standard state. By comparing the results, the diagnosis of different fault states and degrees of rolling bearings is realized. The XJTU-SY dataset was used for experimentation, and the results show that the proposed method achieves a diagnostic accuracy of 95.24% and better diagnostic performance compared with various algorithms. It provides a reference for the fault diagnosis of rolling bearings over the full life cycle.
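As an illustration of the grey relational idea (this is the standard grey relational grade, not the authors' entropy-weighted combination of proximity and similarity degrees):

```python
import numpy as np

# Standard grey relational grade between a feature vector and a reference
# (standard-state) vector; rho is the distinguishing coefficient.
def grey_relational_grade(x, ref, rho=0.5):
    x, ref = np.asarray(x, float), np.asarray(ref, float)
    delta = np.abs(x - ref)
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)  # per-feature coefficient
    return coeff.mean()                                 # grade = mean coefficient

ref = [1.0, 2.0, 3.0]
print(grey_relational_grade([1.1, 2.0, 3.0], ref))  # ≈ 0.778 (close candidate)
print(grey_relational_grade([2.0, 3.0, 3.0], ref))  # ≈ 0.556 (farther candidate)
```

In a pipeline like the paper's, such grades would be computed between the extracted feature vector and each standard fault state, with the largest grade giving the diagnosis.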
]]>Entropy doi: 10.3390/e26030221
Authors: Leïla Moueddene Arnaldo Donoso Bertrand Berche
In this note, we revisit the scaling relations among “hatted critical exponents”, which were first derived by Ralph Kenna, Des Johnston, and Wolfhard Janke, and we propose an alternative derivation for some of them. For the scaling relation involving the behavior of the correlation function, we will propose an alternative form since we believe that the expression is erroneous in the work of Ralph and his collaborators.
]]>Entropy doi: 10.3390/e26030220
Authors: Anna Bryniarska José A. Ramos Mercedes Fernández
Machine learning (ML) methods are increasingly being applied to analyze biological signals. For example, ML methods have been successfully applied to the human electroencephalogram (EEG) to classify neural signals as pathological or non-pathological and to predict working memory performance in healthy individuals and psychiatric patients. ML approaches can quickly process large volumes of data to reveal patterns that may be missed by humans. This study investigated the accuracy of ML methods at classifying the brain’s electrical responses to cognitive events, i.e., event-related brain potentials (ERPs). ERPs are extracted from the ongoing EEG and represent electrical potentials in response to specific events. ERPs were evoked during a visual Go/NoGo task. The Go/NoGo task requires a button press on Go trials and response withholding on NoGo trials. NoGo trials elicit neural activity associated with inhibitory control processes. We compared the accuracy of six ML algorithms at classifying the ERPs associated with each trial type. The raw electrical signals were fed to all ML algorithms to build predictive models. The same raw data were then truncated in length and fitted to multiple dynamic state space models of order nx using a continuous-time subspace-based system identification algorithm. The 4nx numerator and denominator parameters of the transfer function of the state space model were then used as substitutes for the data. Dimensionality reduction simplifies classification, reduces noise, and may ultimately improve the predictive power of ML models. Our findings revealed that all ML methods correctly classified the electrical signal associated with each trial type with a high degree of accuracy, and accuracy remained high after parameterization was applied. We discuss the models and the usefulness of the parameterization.
]]>Entropy doi: 10.3390/e26030219
Authors: Jasleen Kaur Ramandeep S. Johal
We consider an autonomous heat engine in simultaneous contact with a hot and a cold reservoir and describe it within a linear irreversible framework. In a tight-coupling approximation, the rate of entropy generation is effectively written in terms of a single thermal flux that is a homogeneous function of the hot and cold fluxes. The specific algebraic forms of the effective flux are deduced for scenarios containing internal and external irreversibilities for the typical example of a thermoelectric generator.
]]>Entropy doi: 10.3390/e26030218
Authors: Shakera K. Khan Bellie Sivakumar
Catchment classification plays an important role in many applications associated with water resources and environment. In recent years, several studies have applied the concepts of nonlinear dynamics and chaos for catchment classification, mainly using dimensionality measures. The present study explores prediction as a measure for catchment classification, through application of a nonlinear local approximation prediction method. The method uses the concept of phase-space reconstruction of a time series to represent the underlying system dynamics and identifies nearest neighbors in the phase space for system evolution and prediction. The prediction accuracy measures, as well as the optimum values of the parameters involved in the method (e.g., phase space or embedding dimension, number of neighbors), are used for classification. For implementation, the method is applied to daily streamflow data from 218 catchments in Australia, and predictions are made for different embedding dimensions and number of neighbors. The prediction results suggest that phase-space reconstruction using streamflow alone can provide good predictions. The results also indicate that better predictions are achieved for lower embedding dimensions and smaller numbers of neighbors, suggesting possible low dimensionality of the streamflow dynamics. The classification results based on prediction accuracy are found to be useful for identification of regions/stations with higher predictability, which has important implications for interpolation or extrapolation of streamflow data.
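The local approximation procedure described above — reconstruct the phase space from delay vectors, find the nearest neighbors of the current state, and evolve them forward — can be sketched as follows. This is a minimal one-step predictor with unit delay; the study's embedding dimensions, delay, and averaging scheme may differ:

```python
import numpy as np

def local_approx_predict(series, m, k):
    """One-step prediction via phase-space reconstruction:
    embed the series with dimension m (unit delay), find the k
    nearest neighbours of the current state, and average their
    one-step successors."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    # Delay vectors X_i = (x_i, ..., x_{i+m-1}), each with successor x_{i+m}
    X = np.array([series[i:i + m] for i in range(n - m)])
    successors = series[m:]
    query = series[-m:]                  # the current (latest) state
    dists = np.linalg.norm(X - query, axis=1)
    nearest = np.argsort(dists)[:k]      # indices of the k closest states
    return successors[nearest].mean()

# A perfectly periodic "streamflow" series is predicted exactly.
flow = [0.0, 1.0, 2.0] * 4
print(local_approx_predict(flow, m=2, k=3))  # -> 0.0
```

In practice, sweeping `m` and `k` and recording the prediction accuracy for each combination yields exactly the classification features the abstract describes.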
]]>Entropy doi: 10.3390/e26030217
Authors: Songbo Xie Daniel Younis Yuhan Mei Joseph H. Eberly
Genuine multipartite entanglement is crucial for quantum information and related technologies, but quantifying it has been a long-standing challenge. Most proposed measures do not meet the “genuine” requirement, making them unsuitable for many applications. In this work, we take a step toward addressing this issue by introducing an unexpected relation between multipartite entanglement and the hypervolume of geometric simplices, leading to a tetrahedron measure of quadripartite entanglement. By comparing the entanglement ranking of two highly entangled four-qubit states, we show that the tetrahedron measure relies on the degree of permutation invariance among parties within the quantum system. We demonstrate potential future applications of our measure in the context of quantum information scrambling within many-body systems.
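The measure builds on the hypervolume of geometric simplices. As a hedged illustration of the geometric ingredient only (not the entanglement measure itself), the hypervolume of a d-simplex follows from a single determinant over its edge vectors:

```python
import numpy as np
from math import factorial

def simplex_volume(vertices):
    """Hypervolume of a d-simplex given its d+1 vertices in R^d:
    V = |det(v_1 - v_0, ..., v_d - v_0)| / d!"""
    v = np.asarray(vertices, dtype=float)
    d = v.shape[1]
    edges = v[1:] - v[0]          # d edge vectors emanating from v_0
    return abs(np.linalg.det(edges)) / factorial(d)

# The unit right tetrahedron has volume 1/6.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(simplex_volume(verts))  # -> 0.16666666666666666
```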
]]>Entropy doi: 10.3390/e26030216
Authors: Hongyu Wu Xiaoning Feng Jiale Zhang
The SAND algorithm is a family of lightweight AND-RX block ciphers released by DCC in 2022. Our research focuses on assessing the security of SAND with a quantum computation model. This paper presents the first quantum implementation of SAND (including two versions of SAND, SAND-64 and SAND-128). Considering the depth-times-width metric, the quantum circuit implementation of the SAND algorithm demonstrates a relatively lower consumption of quantum resources than that of the quantum implementations of existing lightweight algorithms. A generalized Grover-based brute-force attack framework was implemented and employed to perform attacks on two versions of the SAND algorithm. This framework utilized the g-database algorithm, which considered different plaintext–ciphertext pairs in a unified manner, reducing quantum resource consumption. Our findings indicate that the SAND-128 algorithm achieved the NIST security level I, while the SAND-64 algorithm fell short of meeting the requirements of security level I.
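For context on Grover-based brute-force key search, the optimal number of Grover iterations over a 2^k key space is about (π/4)·2^(k/2). A small arithmetic sketch (generic Grover counting, not the paper's g-database circuit):

```python
from math import pi, sqrt, floor

def grover_iterations(key_bits):
    """Optimal Grover iteration count for unstructured search
    over a key space of size 2^k: floor(pi/4 * sqrt(2^k))."""
    return floor(pi / 4 * sqrt(2 ** key_bits))

# Toy 20-bit key space: ~2^10 iterations instead of ~2^20 classical guesses.
print(grover_iterations(20))  # -> 804
```

The NIST security levels referenced in the abstract are then judged by the total quantum gate cost of running this many iterations of the cipher's circuit.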
]]>Entropy doi: 10.3390/e26030213
Authors: Jia-Chen Hua Eun-jin Kim Fei He
In this work, we explore information geometry theoretic measures for characterizing neural information processing from EEG signals simulated by stochastic nonlinear coupled oscillator models for both healthy subjects and Alzheimer’s disease (AD) patients with both eyes-closed and eyes-open conditions. In particular, we employ information rates to quantify the time evolution of probability density functions of simulated EEG signals, and employ causal information rates to quantify one signal’s instantaneous influence on another signal’s information rate. These two measures help us find significant and interesting distinctions between healthy subjects and AD patients when they open or close their eyes. These distinctions may be further related to differences in neural information processing activities of the corresponding brain regions, and to differences in connectivities among these brain regions. Our results show that information rate and causal information rate are superior to their more traditional or established information-theoretic counterparts, i.e., differential entropy and transfer entropy, respectively. Since these novel, information geometry theoretic measures can be applied to experimental EEG signals in a model-free manner, and they are capable of quantifying non-stationary time-varying effects, nonlinearity, and non-Gaussian stochasticity present in real-world EEG signals, we believe that they can form an important and powerful tool-set for both understanding neural information processing in the brain and the diagnosis of neurological disorders, such as Alzheimer’s disease as presented in this work.
]]>Entropy doi: 10.3390/e26030215
Authors: Lingyu Tang Jun Wang Mengyao Wang Chunyu Zhao
The echo state network (ESN) is a recurrent neural network that has yielded state-of-the-art results in many areas owing to its rapid learning ability and the fact that the weights of input neurons and hidden neurons are fixed throughout the learning process. However, the setting procedure for initializing the ESN’s recurrent structure may lead to difficulties in designing a sound reservoir that matches a specific task. This paper proposes an improved pre-training method to adjust the model’s parameters and topology to obtain an adaptive reservoir for a given application. Two strategies, namely global random selection and ensemble training, are introduced to pre-train the randomly initialized ESN model. Specifically, particle swarm optimization is applied to optimize chosen fixed and global weight values within the network, and the reliability and stability of the pre-trained model are enhanced by employing the ensemble training strategy. In addition, we test the feasibility of the model for time series prediction on six benchmarks and two real-life datasets. The experimental results show a clear enhancement in the ESN learning results. Furthermore, the proposed global random selection and ensemble training strategies are also applied to pre-train the extreme learning machine (ELM), which has a similar training process to the ESN model. Numerical experiments are subsequently carried out on the above-mentioned eight datasets. The experimental findings consistently show that the performance of the proposed pre-trained ELM model is also improved significantly. The suggested two strategies can thus enhance the ESN and ELM models’ prediction accuracy and adaptability.
]]>Entropy doi: 10.3390/e26030214
Authors: Jacek Siódmiak
In the case of certain chemical compounds, especially organic ones, electrons can be delocalized between different atoms within the molecule. The resulting bonds, known as resonance bonds, pose a challenge not only for theoretical descriptions of the studied system but also for simulating such systems using molecular dynamics methods. In computer simulations of such systems, it is common practice to use fractional bonds as an averaged value across equivalent structures, known as a resonance hybrid. This paper presents the results of the analysis of five forms of C60 fullerene polymorphs: one with all bonds being resonance, three with all bonds being integer (singles and doubles in different configurations), and one with the majority of bonds being integer (singles and doubles) and ten bonds (within two opposite pentagons) valued at one and a half. The analysis involved the Shannon entropy of the bond length distributions and the eigenfrequency of intrinsic vibrations (first vibrational mode), which reflects the stiffness of the entire structure. Maps of the electrostatic potential distribution around the investigated structures are presented, and the dipole moment was estimated. Introducing asymmetry in bond redistribution by incorporating mixed bonds (integer and partial), in contrast to variants with equivalent bonds, resulted in a significant change in the examined observables.
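The Shannon entropy of a bond length distribution can be sketched as follows (illustrative binning; the paper's histogram settings are not specified here). A structure with identical bonds gives zero entropy, while a spread of bond lengths gives positive entropy:

```python
import numpy as np

def shannon_entropy(bond_lengths, bins=10):
    """Shannon entropy (in bits) of a histogram of bond lengths."""
    counts, _ = np.histogram(bond_lengths, bins=bins)
    p = counts[counts > 0] / counts.sum()   # empirical bin probabilities
    return float(-(p * np.log2(p)).sum())

print(shannon_entropy([1.40] * 60))            # all bonds equal -> 0.0
print(shannon_entropy([1.35, 1.45] * 30) > 0)  # two bond lengths -> True
```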
]]>Entropy doi: 10.3390/e26030212
Authors: Matthias Gsänger Volker Hösel Christoph Mohamad-Klotzbach Johannes Müller
A unifying setup for opinion models originating in statistical physics and stochastic opinion dynamics is developed and used to analyze election data. The results are interpreted in the light of political theory. We investigate the connection between Potts (Curie–Weiss) models and stochastic opinion models in the view of the Boltzmann distribution and stochastic Glauber dynamics. We particularly find that the q-voter model can be considered as a natural extension of the Zealot model, which is adapted by Lagrangian parameters. We also discuss weak and strong effects (also called extensive and nonextensive) continuum limits for the models. The results are used to compare the Curie–Weiss model, two q-voter models (weak and strong effects), and a reinforcement model (weak effects) in explaining electoral outcomes in four western democracies (United States, Great Britain, France, and Germany). We find that particularly the weak effects models are able to fit the data (Kolmogorov–Smirnov test), with the weak effects reinforcement model performing best (AIC). Additionally, we show how the institutional structure shapes the process of opinion formation. By focusing on the dynamics of opinion formation preceding the act of voting, the models discussed in this paper give insights both into the empirical explanation of elections as such, as well as important aspects of the theory of democracy. Therefore, this paper shows the usefulness of an interdisciplinary approach in studying real world political outcomes by using mathematical models.
]]>Entropy doi: 10.3390/e26030211
Authors: Shiyu Ouyang Qianlan Bai Hui Feng Bo Hu
The rapid development of cryptocurrencies has led to an increasing severity of money laundering activities. In recent years, leveraging graph neural networks for cryptocurrency fraud detection has yielded promising results. However, many existing methods predominantly focus on node classification, i.e., detecting individual illicit transactions, rather than uncovering behavioral pattern differences among money laundering groups. In this paper, we tackle the challenges presented by the organized, heterogeneous, and noisy nature of Bitcoin money laundering. We propose a novel subgraph-based contrastive learning algorithm for heterogeneous graphs, named Bit-CHetG, to perform money laundering group detection. Specifically, we employ predefined metapaths to construct the homogeneous subgraphs of wallet addresses and transaction records from the address–transaction heterogeneous graph, enhancing our ability to capture heterogeneity. Subsequently, we utilize graph neural networks to separately extract the topological embedding representations of transaction subgraphs and associated address representations of transaction nodes. Lastly, supervised contrastive learning is introduced to reduce the effect of noise, which pulls together the transaction subgraphs with the same class while pushing apart the subgraphs with different classes. By conducting experiments on two real-world datasets with homogeneous and heterogeneous graphs, the Micro F1 Score of our proposed Bit-CHetG is improved by at least 5% compared to others.
]]>Entropy doi: 10.3390/e26030210
Authors: Yang Chen Bowen Shi
Recent years have seen a rise in interest in document-level relation extraction, which is defined as extracting all relations between entities in multiple sentences of a document. Typically, there are multiple mentions corresponding to a single entity in this context. Previous research predominantly employed a holistic representation for each entity to predict relations, but this approach often overlooks valuable information contained in fine-grained entity mentions. We contend that relation prediction and inference should be grounded in specific entity mentions rather than abstract entity concepts. To address this, our paper proposes a two-stage mention-level framework based on an enhanced heterogeneous graph attention network for document-level relation extraction. Our framework employs two different strategies to model intra-sentential and inter-sentential relations between fine-grained entity mentions, yielding local mention representations for intra-sentential relation prediction and global mention representations for inter-sentential relation prediction. For inter-sentential relation prediction and inference, we propose an enhanced heterogeneous graph attention network to better model the long-distance semantic relationships and design an entity-coreference path-based inference strategy to conduct relation inference. Moreover, we introduce a novel cross-entropy-based multilabel focal loss function to address the class imbalance problem and multilabel prediction simultaneously. Comprehensive experiments have been conducted to verify the effectiveness of our framework. Experimental results show that our approach significantly outperforms the existing methods.
]]>Entropy doi: 10.3390/e26030209
Authors: Chao Qu Xiaoyu Chen Qihan Xu Jing Han
The supervised super-resolution (SR) methods based on simple degradation assumptions (e.g., bicubic downsampling) have unsatisfactory generalization ability on real-world thermal images. To enhance SR performance on real-world scenes, we introduce an unsupervised SR framework for thermal images, incorporating degradation modeling and corresponding SR. Inspired by the physical prior that high frequency affects details and low frequency affects thermal contrast, we propose a frequency-aware degradation model, named TFADGAN. The model achieves image quality migration between thermal detectors of different resolutions by degrading different frequency components of the image from high-resolution (HR) to low-resolution (LR). Specifically, by adversarial learning with unpaired LR thermal images, the complex degradation processes of HR thermal images at low and high frequencies are modeled separately. Benefiting from the thermal characteristics mined from real-world images, the degraded images generated by TFADGAN are similar to LR thermal ones in terms of detail and contrast. Then, the SR model is trained based on the pseudo-paired data consisting of degraded images and HR images. Extensive experimental results demonstrate that the degraded images generated by TFADGAN provide reliable alternatives to real-world LR thermal images. In real-world thermal image experiments, the proposed SR framework improves the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) by 1.28 dB and 0.02, respectively.
]]>Entropy doi: 10.3390/e26030208
Authors: Xueyuan Chen Shangzhe Li
Due to the success observed in deep neural networks with contrastive learning, there has been a notable surge in research interest in graph contrastive learning, primarily attributed to its superior performance in graphs with limited labeled data. Within contrastive learning, the selection of a “view” dictates the information captured by the representation, thereby influencing the model’s performance. However, assessing the quality of information in these views poses challenges, and determining what constitutes a good view remains unclear. This paper addresses this issue by establishing the definition of a good view through the application of graph information bottleneck and structural entropy theories. Based on theoretical insights, we introduce CtrlGCL, a novel method for achieving a beneficial view in graph contrastive learning through coding tree representation learning. Extensive experiments were conducted to ascertain the effectiveness of the proposed view in unsupervised and semi-supervised learning. In particular, our approach, via CtrlGCL-H, yields an average accuracy enhancement of 1.06% under unsupervised learning when compared to GCL. This improvement underscores the efficacy of our proposed method.
]]>Entropy doi: 10.3390/e26030207
Authors: Jianmin Yi Hao Wu Ying Guo
Building an underwater quantum network is necessary for various applications such as ocean exploration, environmental monitoring, and national defense. Motivated by characteristics of the oceanic turbulence channel, we suggest a machine learning approach to predicting the channel characteristics of continuous variable (CV) quantum key distribution (QKD) in challenging seawater environments. We consider passive CV measurement-device-independent (MDI) QKD in oceanic scenarios, since the passive-state preparation scheme requires only simple linear elements, resulting in reduced interaction with the practical environment. To provide a practical reference for underwater quantum communications, we predict the transmittance of oceanic quantum links with a neural network, as an example of a machine learning algorithm. The results show good consistency with the real data within the allowable error range, which makes passive CVQKD more promising for commercialization and implementation.
]]>Entropy doi: 10.3390/e26030206
Authors: Chenglong Hu Ranran Guo
Sustainable development is a practical path to optimizing industrial structures and enhancing investment efficiency. Investigating risk contagion within ESG industries is a crucial step towards reducing systemic risks and fostering the green evolution of the economy. This research constructs ESG industry indices, taking into account the possibility of extreme tail risks, and employs VaR and CoVaR as measures of tail risk. The TENET network approach is integrated to capture the structural evolution and direction of information flow among ESG industries, employing information entropy to quantify the topological characteristics of the network model and exploring the risk transmission paths and evolution patterns of ESG industries in extreme tail risk events. Finally, Mantel tests are conducted to examine whether significant risk spillover effects exist between ESG and traditional industries. The research finds strong correlations among ESG industry indices during the stock market crash, the Sino–US trade frictions, and the COVID-19 pandemic, with industries such as COAL, CMP, COM, RT, and RE playing key roles in risk transmission within the network, transmitting risks to other industries. Affected by systemic risk, the information entropy of the TENET network decreases significantly, reducing market information uncertainty and leading market participants to adopt more uniform investment strategies, thus diminishing the diversity of market behaviors. ESG industries show resilience in the face of extreme risks, demonstrating a lack of significant risk contagion with traditional industries.
]]>Entropy doi: 10.3390/e26030205
Authors: Hutuo Quan Huicheng Lai Guxue Gao Jun Ma Junkai Li Dongji Chen
Human–object interaction (HOI) detection aims to localize and recognize the relationship between humans and objects, which helps computers understand high-level semantics. In HOI detection, two-stage and one-stage methods have distinct advantages and disadvantages. The two-stage methods can obtain high-quality human–object pair features based on object detection but lack contextual information. The one-stage transformer-based methods can model good global features but cannot benefit from object detection. The ideal model should have the advantages of both methods. Therefore, we propose the Pairwise Convolutional neural network (CNN)-Transformer (PCT), a simple and effective two-stage method. The model both fully utilizes the object detector and has rich contextual information. Specifically, we obtain pairwise CNN features from the CNN backbone. These features are fused with pairwise transformer features to enhance the pairwise representations. The enhanced representations are superior to using CNN and transformer features individually. In addition, the global features of the transformer provide valuable contextual cues. We fairly compare the performance of pairwise CNN and pairwise transformer features in HOI detection. The experimental results show that the previously neglected CNN features still have a significant edge. Compared to state-of-the-art methods, our model achieves competitive results on the HICO-DET and V-COCO datasets.
]]>Entropy doi: 10.3390/e26030204
Authors: D. Y. Charcon L. H. A. Monteiro
The Ultimatum Game is a simplistic representation of bargaining processes occurring in social networks. In the standard version of this game, the first player, called the proposer, makes an offer on how to split a certain amount of money. If the second player, called the responder, accepts the offer, the money is divided according to the proposal; if the responder declines the offer, both players receive no money. In this article, an agent-based model is employed to evaluate the performance of five distinct strategies of playing a modified version of this game. A strategy corresponds to instructions on how a player must act as the proposer and as the responder. Here, the strategies are inspired by the following basic emotions: anger, fear, joy, sadness, and surprise. Thus, in the game, each interacting agent is a player endowed with one of these five basic emotions. In the modified version explored in this article, the spatial dimension is taken into account and the survival of the players depends on successful negotiations. Numerical simulations are performed in order to determine which basic emotion dominates the population in terms of prevalence and accumulated money. Information entropy is also computed to assess the time evolution of population diversity and money distribution. From the obtained results, a conjecture on the emergence of the sense of fairness is formulated.
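The payoff rule of the standard game can be sketched as follows. This is a minimal round with a fixed acceptance threshold; the emotion-based strategies and spatial survival dynamics of the modified version are not modeled here:

```python
def ultimatum_round(total, offer, acceptance_threshold):
    """One round of the standard Ultimatum Game.
    The proposer offers `offer` out of `total`; the responder
    accepts iff the offer meets their threshold.
    Returns the (proposer, responder) payoffs."""
    if offer >= acceptance_threshold:
        return total - offer, offer   # proposal accepted: money is split
    return 0, 0                       # proposal rejected: nobody gets money

print(ultimatum_round(100, 40, 30))  # -> (60, 40)
print(ultimatum_round(100, 10, 30))  # -> (0, 0)
```

In an agent-based setting, each of the five emotion-inspired strategies would map to its own rule for choosing `offer` as proposer and `acceptance_threshold` as responder.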
]]>Entropy doi: 10.3390/e26030203
Authors: Cameron Witkowski Stephen Brown Kevin Truong
We present a modified version of the Szilard engine, demonstrating that an explicit measurement procedure is entirely unnecessary for its operation. By considering our modified engine, we are able to provide a new interpretation of Landauer’s original argument for the cost of erasure. From this view, we demonstrate that a reset operation is strictly impossible in a dynamical system with only conservative forces. Then, we prove that approaching a reset yields an unavoidable instability at the reset point. Finally, we present an original proof of Landauer’s principle that is completely independent from the Second Law of thermodynamics.
]]>Entropy doi: 10.3390/e26030201
Authors: Artem Romanenko Vitaly Vanchurin
We developed a macroscopic description of evolutionary dynamics by following the temporal evolution of the total Shannon entropy of sequences, denoted by S, and the average Hamming distance between them, denoted by H. We argue that a biological system can persist in the so-called quasi-equilibrium state for an extended period, characterized by strong correlations between S and H, before undergoing a phase transition to another quasi-equilibrium state. To demonstrate the results, we conducted a statistical analysis of SARS-CoV-2 data from the United Kingdom during the period between March 2020 and December 2023. From a purely theoretical perspective, this allowed us to systematically study various types of phase transitions described by a discontinuous change in the thermodynamic parameters. From a more practical point of view, the analysis can be used, for example, as an early warning system for pandemics.
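A hedged sketch of the two macroscopic observables, assuming S is the per-site Shannon entropy summed over sites and H the average pairwise Hamming distance (the paper's exact definitions may differ in normalization):

```python
from collections import Counter
from itertools import combinations
from math import log2

def total_entropy(seqs):
    """Total Shannon entropy S (in bits): sum over alignment sites
    of the entropy of the symbol distribution at that site."""
    S = 0.0
    for site in zip(*seqs):               # one column of the alignment
        n = len(site)
        S += -sum(c / n * log2(c / n) for c in Counter(site).values())
    return S

def mean_hamming(seqs):
    """Average pairwise Hamming distance H between sequences."""
    pairs = list(combinations(seqs, 2))
    return sum(sum(a != b for a, b in zip(s, t)) for s, t in pairs) / len(pairs)

seqs = ["ACGT", "ACGA", "ACTA"]
print(total_entropy(seqs))   # only the last two sites are variable
print(mean_hamming(seqs))    # -> 1.3333333333333333
```

Tracking (S, H) pairs over time windows then reveals the correlated quasi-equilibrium epochs described in the abstract.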
]]>Entropy doi: 10.3390/e26030202
Authors: Ti-Wei Xue Zeng-Yuan Guo
The Kelvin relation, relating the Seebeck coefficient and the Peltier coefficient, is a theoretical basis of thermoelectricity. It was first derived by Kelvin using a quasi-thermodynamic approach. However, Kelvin’s approach was subjected to much criticism due to its crude neglect of irreversible factors. It was only later that a seemingly plausible proof of the Kelvin relation was given using the Onsager reciprocal relation with full consideration of irreversibility. Despite this, a critical issue remains. It is believed that the Seebeck and Peltier effects are thermodynamically reversible, and therefore, the Kelvin relation should also be independent of irreversibility. Kelvin’s quasi-thermodynamic approach, although seemingly irrational, may well have touched on the essence of thermoelectricity. To avoid Kelvin’s dilemma, this study conceives the physical scenarios of equilibrium thermodynamics to explore thermoelectricity. Unlike Kelvin’s quasi-thermodynamic approach, here, a completely reversible thermodynamic approach is used to establish the reciprocal relations of thermoelectricity, on the basis of which the Kelvin relation is once again derived. Moreover, a direct thermodynamic derivation of the Onsager reciprocal relations for fluxes defined as the time derivative of an extensive state variable is given using the method of equilibrium thermodynamics. The present theory can be extended to other coupled phenomena.
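For reference, the Kelvin relation in question connects the Peltier coefficient Π and the Seebeck coefficient α at absolute temperature T, and the Onsager reciprocity invoked in the later proof equates the off-diagonal kinetic coefficients:

```latex
\Pi = \alpha T, \qquad L_{12} = L_{21}
```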
]]>Entropy doi: 10.3390/e26030200
Authors: Ruofan Qiu Xinyuan Yang Yue Bao Yancheng You Hua Jin
A shock wave is a flow phenomenon that needs to be considered in the development of high-speed aircraft and engines. The traditional computational fluid dynamics (CFD) method describes it from the perspective of macroscopic variables, such as the Mach number, pressure, density, and temperature. The thickness of the shock wave is close to the level of the molecular free path, and molecular motion has a strong influence on the shock wave. According to the analysis of the Chapman-Enskog approach, the nonequilibrium effect is the source term that causes the fluid system to deviate from the equilibrium state. The nonequilibrium effect can be used to obtain a description of the physical characteristics of shock waves that are different from the macroscopic variables. The basic idea of the nonequilibrium effect approach is to obtain the nonequilibrium moment of the molecular velocity distribution function by solving the Boltzmann–Bhatnagar–Gross–Krook (Boltzmann BGK) equations or multiple relaxation times Boltzmann (MRT-Boltzmann) equations and to explore the nonequilibrium effect near the shock wave from the molecular motion level. This article introduces the theory and understanding of the nonequilibrium effect approach and reviews the research progress of nonequilibrium behavior in shock-related flow phenomena. The role of nonequilibrium moments played on the macroscopic governing equations of fluids is discussed, the physical meaning of nonequilibrium moments is given from the perspective of molecular motion, and the relationship between nonequilibrium moments and equilibrium moments is analyzed. Studies on the nonequilibrium effects of shock problems, such as the Riemann problem, shock reflection, shock wave/boundary layer interaction, and detonation wave, are introduced. 
This review reveals the nonequilibrium behavior of the shock wave at the mesoscopic level, which differs from the traditional macroscopic perspective, and shows the application potential of the mesoscopic kinetic approach of the nonequilibrium effect in shock problems.
]]>Entropy doi: 10.3390/e26030199
Authors: Shimiao Tang Jiarong Li Haijun Jiang Jinling Wang
This paper concerns a class of coupled competitive neural networks, subject to disturbance and discontinuous activation functions. To realize the fixed-time quasi-bipartite synchronization, an aperiodic intermittent controller is initially designed. Subsequently, by combining the fixed-time stability theory and nonsmooth analysis, several criteria are established to ensure the bipartite synchronization in fixed time. Moreover, synchronization error bounds and settling time estimates are provided. Finally, numerical simulations are presented to verify the main results.
]]>Entropy doi: 10.3390/e26030198
Authors: Marcin Nowakowski
In this paper, we focus on the underlying quantum structure of temporal correlations and show their peculiar nature which differentiates them from spatial quantum correlations. With a growing interest in the representation of quantum states as topological objects, we consider quantum history bundles based on the temporal manifold and show the source of the violation of monogamous temporal Bell-like inequalities. We introduce definitions for the mixture of quantum histories and consider their entanglement as sections over the Hilbert vector bundles. As a generalization of temporal Bell-like inequalities, we derive the quantum bound for multi-time Bell-like inequalities.
]]>Entropy doi: 10.3390/e26030197
Authors: Ewa Roszkowska Marzena Filipowicz-Chomko Anna Łyczkowska-Hanćkowiak Elżbieta Majewska
One of the crucial steps in multi-criteria decision analysis involves establishing the importance of criteria and determining the relationships between them. This paper proposes an extended Hellwig’s method (H_EM) that utilizes entropy-based weights and Mahalanobis distance to address this issue. By incorporating the concept of entropy, weights are determined based on their information content represented by the matrix data. The Mahalanobis distance is employed to address interdependencies among criteria, contributing to the improved performance of the proposed framework. To illustrate the relevance and effectiveness of the extended H_EM method, this study utilizes it to assess the progress toward achieving Sustainable Development Goal 4 of the 2030 Agenda within the European Union countries for education in the year 2021. The results obtained by the extended Hellwig’s method are compared with those of its other variants. The results reveal a significant impact on the ranking of the EU countries in the education area, depending on the choice of distance measure (Euclidean or Mahalanobis) and the system of weights (equal or entropy-based). Overall, this study highlights the potential of the proposed method in addressing complex decision-making scenarios with interdependent criteria.
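The entropy-based weighting step can be sketched as follows (a standard entropy-weight construction; the paper's normalization of the decision matrix may differ). A criterion whose values vary more across alternatives carries more information and receives a larger weight:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-based criterion weights for a decision matrix X
    (rows = alternatives, columns = criteria, positive entries)."""
    P = X / X.sum(axis=0)                       # column-wise shares
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(n)     # entropy per criterion, in [0, 1]
    d = 1 - E                                   # degree of diversification
    return d / d.sum()

# Criterion 1 is constant across alternatives, criterion 2 varies.
X = np.array([[7.0, 3.0], [7.0, 9.0], [7.0, 6.0]])
w = entropy_weights(X)
print(w)  # -> [0. 1.] : the constant criterion gets zero weight
```

The Mahalanobis distance to the reference point would then use the inverse covariance matrix of the criteria, which is how the method accounts for their interdependence.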
]]>Entropy doi: 10.3390/e26030196
Authors: Huayong Zhang Xiaotong Yuan Hengchao Zou Lei Zhao Zhongyu Wang Fenglu Guo Zhao Liu
The insect predator–prey system mediates several feedback mechanisms which regulate species abundance and spatial distribution. However, the spatiotemporal dynamics of such discrete systems with the refuge effect remain elusive. In this study, we analyzed a discrete Holling type II model incorporating the refuge effect using theoretical calculations and numerical simulations, and selected moths with high and low growth rates as two illustrative examples. The result indicates that only the flip bifurcation opens the routes to chaos, and the system undergoes four spatiotemporal behavioral patterns (from the frozen random pattern to the defect chaotic diffusion pattern, then the competition intermittency pattern, and finally to the fully developed turbulence pattern). Furthermore, as the refuge effect increases, moths with relatively slower growth rates tend to maintain stability at relatively low densities, whereas moths with relatively faster growth rates can induce chaos and unpredictability in the population. According to the theoretical guidance of this study, the refuge effect can be adjusted to control pest populations effectively, which provides a new theoretical perspective and is a feasible tool for protecting crops.
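As an illustration only — the abstract does not give the model's exact equations — a discrete Holling type II predator–prey map with a prey refuge typically shelters a fraction m of prey, so that only (1 − m)·x is exposed to predation. All parameter values below are hypothetical:

```python
import numpy as np

def step(x, y, r=2.0, a=5.0, h=0.5, e=0.5, d=0.4, m=0.3):
    """One step of an illustrative discrete predator-prey map with
    a Holling type II functional response and a prey refuge m."""
    exposed = (1 - m) * x                               # prey outside the refuge
    capture = a * exposed * y / (1 + a * h * exposed)   # Holling II response
    x_next = x * np.exp(r * (1 - x)) - capture          # Ricker growth, then predation
    y_next = e * capture + (1 - d) * y                  # conversion and mortality
    return max(x_next, 0.0), max(y_next, 0.0)

# Iterate the map; sweeping r (growth rate) and m (refuge) exposes
# the flip-bifurcation route to chaos discussed in the abstract.
x, y = 0.5, 0.2
traj = []
for _ in range(200):
    x, y = step(x, y)
    traj.append((x, y))
print(traj[-1])
```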
]]>Entropy doi: 10.3390/e26030195
Authors: Lingyu Zhang Yun Kong Youlong Wu Minquan Cheng
In a hierarchical caching system, a server is connected to multiple mirrors, each of which is connected to a different set of users, and both the mirrors and the users are equipped with caching memories. All the existing schemes focus on single file retrieval, i.e., each user requests one file. In this paper, we consider the linear function retrieval problem, i.e., each user requests a linear combination of files, which includes single file retrieval as a special case. We propose a new scheme that reduces the transmission load of the first hop by jointly utilizing the two layers’ cache memories, and we show that our scheme achieves the optimal load for the second hop in some cases.
]]>Entropy doi: 10.3390/e26030194
Authors: Chris Fields James F. Glazebrook Michael Levin
The ideas of self-observation and self-representation, and the concomitant idea of self-control, pervade both the cognitive and life sciences, arising in domains as diverse as immunology and robotics. Here, we ask in a very general way whether, and to what extent, these ideas make sense. Using a generic model of physical interactions, we prove a theorem and several corollaries that severely restrict applicable notions of self-observation, self-representation, and self-control. We show, in particular, that adding observational, representational, or control capabilities to a meta-level component of a system cannot, even in principle, lead to a complete meta-level representation of the system as a whole. We conclude that self-representation can at best be heuristic, and that self models cannot, in general, be empirically tested by the systems that implement them.
]]>Entropy doi: 10.3390/e26030193
Authors: Frank Nielsen
Exponential families are statistical models which are the workhorses in statistics, information theory, and machine learning, among others. An exponential family can either be normalized subtractively by its cumulant or free energy function, or equivalently normalized divisively by its partition function. Both the cumulant and partition functions are strictly convex and smooth functions inducing corresponding pairs of Bregman and Jensen divergences. It is well known that skewed Bhattacharyya distances between the probability densities of an exponential family amount to skewed Jensen divergences induced by the cumulant function between their corresponding natural parameters, and that in limit cases the sided Kullback–Leibler divergences amount to reverse-sided Bregman divergences. In this work, we first show that the α-divergences between non-normalized densities of an exponential family amount to scaled α-skewed Jensen divergences induced by the partition function. We then show how comparative convexity with respect to a pair of quasi-arithmetical means allows both convex functions and their arguments to be deformed, thereby defining dually flat spaces with corresponding divergences when ordinary convexity is preserved.
]]>Entropy doi: 10.3390/e26030192
Authors: Mengyuan Chen Jilan Liu Ning Zhang Yichao Zheng
With the deepening diversification and openness of financial systems, financial vulnerability, as an endogenous attribute of such systems, has become an important measure of financial security. Based on a network analysis, we introduce a network curvature indicator improved by Copula entropy as an innovative metric of financial vulnerability. Compared with the previous network curvature analysis method, the CE-based curvature proposed in this paper can measure market vulnerability and systematic risk with significant advantages.
]]>Entropy doi: 10.3390/e26030191
Authors: Marian Kupczynski
In his article in Science, Nicolas Gisin claimed that quantum correlations emerge from outside space–time. We explain that they are due to space-time symmetries. This paper is a critical review of metaphysical conclusions found in many recent articles. It advocates the importance of contextuality, Einstein causality and global symmetries. Bell tests allow only rejecting probabilistic coupling provided by a local hidden variable model, but they do not justify metaphysical speculations about quantum nonlocality and objects which know about each other’s state, even when separated by large distances. The violation of Bell inequalities in physics and in cognitive science can be explained using the notion of Bohr contextuality. If contextual variables, describing varying experimental contexts, are correctly incorporated into a probabilistic model, then the Bell–CHSH inequalities cannot be proven and nonlocal correlations may be explained in an intuitive way. We also elucidate the meaning of the statistical independence assumption, incorrectly called free choice, measurement independence or no-conspiracy. Since correlation does not imply causation, the violation of statistical independence should be called contextuality; it does not restrict the experimenter’s freedom of choice. Therefore, contrary to what is believed, closing the freedom-of-choice loophole does not close the contextuality loophole.
]]>Entropy doi: 10.3390/e26030190
Authors: Ralph V. Chamberlin
The 2nd law of thermodynamics yields an irreversible increase in entropy until thermal equilibrium is achieved. This irreversible increase is often assumed to require large and complex systems to emerge from the reversible microscopic laws of physics. We test this assumption using simulations and theory of a 1D ring of N Ising spins coupled to an explicit heat bath of N Einstein oscillators. The simplicity of this system allows the exact entropy to be calculated for the spins and the heat bath for any N, with dynamics that is readily altered from reversible to irreversible. We find thermal-equilibrium behavior in the thermodynamic limit, and in systems as small as N=2, but both results require microscopic dynamics that is intrinsically irreversible.
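The exact entropy of a zero-field 1D Ising ring is a standard transfer-matrix result; as an illustrative sketch (not the authors' spin-plus-oscillator model, which also tracks an explicit Einstein-oscillator heat bath), the following computes S = (U − F)/T from the eigenvalues 2cosh(βJ) and 2sinh(βJ):

```python
import numpy as np

def ising_ring_entropy(N, J, T):
    """Exact entropy S of a zero-field 1D Ising ring of N spins,
    from the transfer-matrix eigenvalues 2cosh(bJ) and 2sinh(bJ)."""
    def lnZ(beta):
        l1, l2 = 2.0 * np.cosh(beta * J), 2.0 * np.sinh(beta * J)
        return np.log(l1 ** N + l2 ** N)
    beta, h = 1.0 / T, 1e-6
    U = -(lnZ(beta + h) - lnZ(beta - h)) / (2.0 * h)  # U = -d lnZ / d beta
    F = -T * lnZ(beta)                                # free energy
    return (U - F) / T                                # S = (U - F) / T
```

At J = 0 this reduces to S = N ln 2, and for J &gt; 0 the entropy decreases monotonically with temperature, which is a quick sanity check on any simulation of the coupled system.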
]]>Entropy doi: 10.3390/e26030189
Authors: Gabriel S. Rocha David Wagner Gabriel S. Denicol Jorge Noronha Dirk H. Rischke
Relativistic dissipative fluid dynamics finds widespread applications in high-energy nuclear physics and astrophysics. However, formulating a causal and stable theory of relativistic dissipative fluid dynamics is far from trivial; efforts to accomplish this reach back more than 50 years. In this review, we give an overview of the field and attempt a comparative assessment of (at least most of) the theories for relativistic dissipative fluid dynamics proposed until today and used in applications.
]]>Entropy doi: 10.3390/e26030188
Authors: Said Eddahmani Sihem Mesnager
Vectorial Boolean functions and codes are closely related and interconnected. On the one hand, various requirements of binary linear codes are needed for their theoretical interests but, more importantly, for their practical applications (such as few-weight codes or minimal codes for secret sharing, locally recoverable codes for storage, etc.). On the other hand, various criteria and tables have been introduced to analyse the security of S-boxes that are related to vectorial Boolean functions, such as the Differential Distribution Table (DDT), the Boomerang Connectivity Table (BCT), and the Differential-Linear Connectivity Table (DLCT). In recent years, two new tables have been proposed for which the literature is already abundant: the c-DDT to extend the DDT and the c-BCT to extend the BCT. In the same vein, we propose extended concepts to study further the security of vectorial Boolean functions, especially the c-Walsh transform, the c-autocorrelation, and the c-differential-linear uniformity and its accompanying table, the c-Differential-Linear Connectivity Table (c-DLCT). We study the properties of these novel functions at their optimal level concerning these concepts and describe the c-DLCT of the crucial inverse vectorial (Boolean) function case. Finally, we draw new ideas for future research toward linear code designs.
]]>Entropy doi: 10.3390/e26030187
Authors: Jan Lewandowsky Gerhard Bauch
In 1999, Naftali Tishby et al. [...]
]]>Entropy doi: 10.3390/e26030186
Authors: Haikun Shang Zhidong Liu Yanlei Wei Shen Zhang
Dissolved gas analysis (DGA) in transformer oil, which analyzes its gas content, is valuable for promptly detecting potential faults in oil-immersed transformers. Given the limitations of traditional transformer fault diagnostic methods, such as insufficient gas characteristic components and a high misjudgment rate for transformer faults, this study proposes a transformer fault diagnosis model based on multi-scale approximate entropy and optimized convolutional neural networks (CNNs). This study introduces an improved sparrow search algorithm (ISSA) for optimizing CNN parameters, establishing the ISSA-CNN transformer fault diagnosis model. The dissolved gas components in the transformer oil are analyzed, and the multi-scale approximate entropy of the gas content under different fault modes is calculated. The computed entropy values are then used as feature parameters for the ISSA-CNN model to derive diagnostic results. Experimental data analysis demonstrates that multi-scale approximate entropy effectively characterizes the dissolved gas components in the transformer oil, significantly improving the diagnostic efficiency. Comparative analysis with BPNN, ELM, and CNNs validates the effectiveness and superiority of the proposed ISSA-CNN diagnostic model across various evaluation metrics.
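The multi-scale approximate entropy features are not spelled out in the abstract; the sketch below follows the standard recipe (Pincus' ApEn applied to coarse-grained copies of the series), with the usual defaults m = 2 and r = 0.2σ taken as assumptions rather than the study's actual settings:

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    """Pincus' approximate entropy ApEn(m, r) of a 1D series.
    Default tolerance r = 0.2 * std (a common convention, assumed here)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def phi(mm):
        n = len(x) - mm + 1
        emb = np.array([x[i:i + mm] for i in range(n)])
        # Chebyshev distance between all template pairs (self-matches included)
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return np.mean(np.log((d <= r).mean(axis=1)))
    return phi(m) - phi(m + 1)

def multiscale_apen(x, scales=(1, 2, 3), m=2):
    """Coarse-grain by non-overlapping averaging, then ApEn at each scale."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        out.append(approx_entropy(x[:n * s].reshape(n, s).mean(axis=1), m))
    return out
```

A regular gas-content series yields low ApEn at every scale, while an erratic one yields high values; in the paper, such per-scale entropies serve as the feature vector fed to the ISSA-CNN.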
]]>Entropy doi: 10.3390/e26030185
Authors: Tamás S. Biró András Telcs Antal Jakovác
We explore formal similarities and mathematical transformation formulas between general trace-form entropies and the Gini index, originally used in quantifying income and wealth inequalities. We utilize the notion of gintropy introduced in our earlier works as a certain property of the Lorenz curve drawn in the map of the tail-integrated cumulative population and wealth fractions. In particular, we rediscover Tsallis’ q-entropy formula related to the Pareto distribution. As a novel result, we express the traditional entropy in terms of gintropy and reconstruct further non-additive formulas. A dynamical model calculation of the evolution of Gini index is also presented.
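As a point of reference for the quantities discussed, the Gini index can be read directly off the discrete Lorenz curve; this minimal sketch (trapezoid rule on sorted values, not the gintropy formalism itself) illustrates the construction:

```python
import numpy as np

def gini(values):
    """Gini index from the discrete Lorenz curve (trapezoid rule)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    # Lorenz curve: cumulative wealth share vs. cumulative population share
    lorenz = np.concatenate(([0.0], np.cumsum(x) / x.sum()))
    # Gini = 1 - 2 * (area under the Lorenz curve)
    area = (lorenz[:-1] + lorenz[1:]).sum() / (2.0 * n)
    return 1.0 - 2.0 * area
```

Perfect equality gives 0 and full concentration on one unit gives (n − 1)/n, the two extremes between which gintropy-based quantities interpolate.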
]]>Entropy doi: 10.3390/e26030184
Authors: Hua Wang Jianzhong Cao Jijiang Huang
Low-light image enhancement (LLIE) aims to improve the visual quality of images taken under complex low-light conditions. Recent works focus on carefully designing Retinex-based methods or end-to-end networks based on deep learning for LLIE. However, these works usually utilize pixel-level error functions to optimize models and have difficulty effectively modeling the real visual errors between the enhanced images and the normally exposed images. In this paper, we propose an adaptive dual aggregation network with normalizing flows (ADANF) for LLIE. First, an adaptive dual aggregation encoder is built to fully explore the global properties and local details of the low-light images for extracting illumination-robust features. Next, a reversible normalizing flow decoder is utilized to model real visual errors between enhanced and normally exposed images by mapping images into underlying data distributions. Finally, to further improve the quality of the enhanced images, a gated multi-scale information transmitting module is leveraged to introduce the multi-scale information from the adaptive dual aggregation encoder into the normalizing flow decoder. Extensive experiments on paired and unpaired datasets have verified the effectiveness of the proposed ADANF.
]]>Entropy doi: 10.3390/e26030183
Authors: Manuel A. Matías Astorga Gerardo Herrera Corral
We present a phenomenological framework based on the MIT bag model to estimate the pressure experienced by quarks and gluons inside nucleons. This is accomplished by implementing non-extensive Tsallis statistics for the two-component system. In this model of hadrons, the strong interaction generates correlations effectively described by the q-Tsallis parameter. The resulting hadron pressure exhibits general agreement with recent calculations derived from Lattice QCD. Additionally, we compared this pressure with data extracted from deep virtual Compton scattering experiments and gravitational form factor analyses. The extended bag model provides an alternative interpretation of bag pressure in terms of the q-Tsallis parameter. Consequently, the MIT bag model can be expressed without requiring the inclusion of the bag pressure parameter.
]]>Entropy doi: 10.3390/e26030182
Authors: Nan Hu Peng Han Rui Wang Fuqiang Shi Lichun Chen Hongyi Li
The northeastern margin of the Tibetan Plateau (NE Tibetan Plateau) exhibits active geological structures and has experienced multiple strong earthquakes, with M ≥ 7, throughout history. Particularly noteworthy is the 1920 M8.5 earthquake in the Haiyuan region that occurred a century ago and is documented as one of the deadliest earthquakes. Consequently, analyzing seismic risks in the northeastern margin of the Tibetan Plateau holds significant importance. The b value, a crucial parameter for seismic activity, plays a pivotal role in seismic hazard analyses. This study calculates the spatial b values in this region based on earthquake catalogs since 1970. The study area encompasses several major active faults, and due to variations in b values across different fault types, traditional grid-search methods may introduce significant errors in calculating the spatial b value within complex fault systems. To address this, we employed the hierarchical space–time point–process (HIST-PPM) method proposed by Ogata. This method avoids partitioning earthquake samples, optimizes parameters using Akaike’s Bayesian Information Criterion (ABIC) with entropy maximization, and theoretically allows for a higher spatial resolution and more accurate b value calculations. The results indicate a high spatial heterogeneity in b values within the study area. The northwestern and southeastern regions exhibit higher b values. Along the Haiyuan fault zone, the central rupture zone of the Haiyuan earthquake has relatively higher b values than other regions of this fault zone, which is possibly related to the sufficient release of stress during the main rupture of the Haiyuan earthquake. The b values vary from high in the west to low in the east along the Zhongwei fault. On the West Qinling fault zone, the epicenter of the recent Minxian–Zhangxian earthquake is associated with a low b value.
In general, regions with low b values correspond well to areas with moderate–strong seismic events in the past 50 years. The spatial differences in b values may reflect variances in seismic hazards among fault zones and regions within the same fault zone.
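The HIST-PPM machinery is beyond a short sketch, but the b value itself is commonly estimated with the Aki/Utsu maximum-likelihood formula b = log10(e) / (⟨M⟩ − (Mc − ΔM/2)); the function below is that textbook estimator (with a binning correction ΔM/2), not the hierarchical method used in the study:

```python
import math

def b_value_mle(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b value for magnitudes >= completeness
    magnitude mc, with the standard binning correction of dm/2."""
    above = [m for m in mags if m >= mc]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))
```

The estimator makes explicit why low b values flag hazard: a heavier tail of large magnitudes raises the mean excess ⟨M⟩ − Mc and pushes b down.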
]]>Entropy doi: 10.3390/e26030181
Authors: Junpei Xu Anbang Wang Xinhui Zhang Laihong Mo Yuhe Zhang Yuehui Sun Yuwen Qin Yuncai Wang
We propose and experimentally demonstrate a wireless-channel key distribution scheme based on laser synchronization induced by a common wireless random signal. Two semiconductor lasers are synchronized under injection of the drive signal after electrical-optical conversion and emit irregular outputs that are used to generate shared keys. Our proof-of-concept experiment using a complex drive signal achieved a secure key generation rate of up to 150 Mbit/s with a bit error rate below 3.8 × 10⁻³. Numerical simulation results show that the proposed scheme has the potential to achieve a distribution distance of several hundred meters. It is believed that common-signal-induced laser synchronization paves the way for high-speed wireless physical-layer key distribution.
]]>Entropy doi: 10.3390/e26030180
Authors: Natalia L. Tsizhmovska Leonid M. Martyushev
In this paper, word length in the texts of public speeches by USA and UK politicians is analyzed. More than 300 speeches delivered over the past two hundred years were studied. It is found that the lognormal distribution better describes the distribution of word length than do the Weibull and Poisson distributions, for example. It is shown that the length of words does not change significantly over time (the average value either does not change or slightly decreases, and the mode slightly increases). These results are fundamentally different from those obtained previously for sentence lengths and indicate that, in terms of quantitative linguistic analysis, the word length in politicians’ speech has not evolved over the last 200 years and does not obey the principle of least effort proposed by G. Zipf.
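A minimal version of the model comparison behind such a claim: maximum-likelihood log-likelihoods for lognormal and Poisson fits to a list of word lengths. This is a sketch of the standard fits, not the authors' exact procedure, and note the caveat that the lognormal is a continuous density applied to integer lengths:

```python
import math

def lognormal_loglik(lengths):
    """MLE lognormal fit to positive word lengths; returns the log-likelihood."""
    logs = [math.log(x) for x in lengths]
    n = len(logs)
    mu = sum(logs) / n
    sigma2 = sum((l - mu) ** 2 for l in logs) / n
    return sum(-lx - 0.5 * math.log(2 * math.pi * sigma2)
               - (lx - mu) ** 2 / (2 * sigma2) for lx in logs)

def poisson_loglik(lengths):
    """MLE Poisson fit (lambda = sample mean); returns the log-likelihood."""
    n = len(lengths)
    lam = sum(lengths) / n
    return sum(x * math.log(lam) - lam - math.lgamma(x + 1) for x in lengths)
```

Comparing such log-likelihoods (or AIC values derived from them) across candidate families is the usual way to conclude that one distribution "better describes" the data.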
]]>Entropy doi: 10.3390/e26030179
Authors: Rathindra Nath Sen
This article examines Wigner’s view on the unreasonable effectiveness of mathematics in the natural sciences, which was based on Cantor’s claim that ‘mathematics is a free creation of the human mind’. It is contended that Cantor’s claim is not relevant to physics because it was based on his power set construction, which does not preserve neighborhoods of geometrical points. It is pointed out that the physical notion of Einstein causality can be defined on a countably infinite point set M with no predefined mathematical structure on it, and this definition endows M with a Tychonoff topology. Under Shirota’s theorem, M can therefore be embedded as a closed subspace of RJ for some J. While this suggests that the differentiable structure of RJ may follow from the principle of causality, the argument is constrained by the fact that the completion processes (analyzed here in some detail) required for the passage from QJ to RJ remain empirically untestable.
]]>Entropy doi: 10.3390/e26030178
Authors: Yanan Dou Yanqing Liu Xueyan Niu Bo Bai Wei Han Yanlin Geng
The celebrated Blahut–Arimoto algorithm computes the capacity of a discrete memoryless point-to-point channel by alternately maximizing the objective function of a maximization problem. This algorithm has been applied to degraded broadcast channels, in which the supporting hyperplanes of the capacity region are again cast as maximization problems. In this work, we consider general broadcast channels and extend this algorithm to compute inner and outer bounds on the capacity regions. Our main contributions are as follows: first, we show that the optimization problems are max–min problems and that the exchange of minimum and maximum holds; second, we design Blahut–Arimoto algorithms for the maximization part and gradient descent algorithms for the minimization part; third, we provide convergence analysis for both parts. Numerical experiments validate the effectiveness of our algorithms.
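For the point-to-point case the classical Blahut–Arimoto iteration alternates a KL-divergence step with an input-distribution update; a compact sketch (natural log internally, capacity returned in bits), independent of the broadcast-channel extension developed in the paper:

```python
import numpy as np

def ba_capacity(W, iters=500):
    """Blahut-Arimoto iteration for the capacity (in bits) of a discrete
    memoryless channel with matrix W[x, y] = p(y | x)."""
    W = np.asarray(W, dtype=float)
    p = np.full(W.shape[0], 1.0 / W.shape[0])  # start from the uniform input

    def kl_rows(q):
        # D(W(.|x) || q) for every input symbol x, with 0 * log(0/q) = 0
        with np.errstate(divide="ignore", invalid="ignore"):
            t = np.where(W > 0, W * np.log(W / q), 0.0)
        return t.sum(axis=1)

    for _ in range(iters):
        d = kl_rows(p @ W)       # q = p W is the induced output distribution
        p = p * np.exp(d)        # multiplicative update toward the maximizer
        p /= p.sum()
    return float(p @ kl_rows(p @ W)) / np.log(2)
```

For a binary symmetric channel with crossover 0.1 this recovers 1 − H2(0.1) ≈ 0.531 bits, matching the closed form.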
]]>Entropy doi: 10.3390/e26030177
Authors: Rodrigo Colnago Contreras Vitor Trevelin Xavier da Silva Igor Trevelin Xavier da Silva Monique Simplicio Viana Francisco Lledo dos Santos Rodrigo Bruno Zanin Erico Fernandes Oliveira Martins Rodrigo Capobianco Guido
Since financial assets on stock exchanges were created, investors have sought to predict their future values. Currently, cryptocurrencies are also seen as assets. Machine learning is increasingly adopted to assist and automate investments. The main objective of this paper is to make daily predictions about the movement direction of financial time series through classification models, financial time series preprocessing methods, and feature selection with genetic algorithms. The target time series are Bitcoin, Ibovespa, and Vale. The methodology of this paper includes the following steps: collecting time series of financial assets; data preprocessing; feature selection with genetic algorithms; and the training and testing of machine learning models. The results were obtained by evaluating the models with the area under the ROC curve metric. For the best prediction models for Bitcoin, Ibovespa, and Vale, values of 0.61, 0.62, and 0.58 were obtained, respectively. In conclusion, feature selection improved the performance of most models, and the input series in the form of percentage variation performed well despite containing fewer attributes than the other sets tested.
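The evaluation metric here, the area under the ROC curve, has a simple rank interpretation: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal sketch of that computation (unrelated to the paper's specific models):

```python
def roc_auc(labels, scores):
    """AUC via the rank (Mann-Whitney) formulation: the fraction of
    positive/negative pairs ranked correctly, ties counted as 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this scale 0.5 is chance level, so AUC values of 0.58 to 0.62 indicate a modest but real edge over random direction guessing.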
]]>Entropy doi: 10.3390/e26030176
Authors: Liubov A. Markovich Justus Urbanetz Vladimir I. Man’ko
This paper delves into the significance of the tomographic probability density function (pdf) representation of quantum states, shedding light on the special classes of pdfs that can be tomograms. Instead of using wave functions or density operators on Hilbert spaces, tomograms, which are the true pdfs, are used to completely describe the states of quantum systems. Unlike quasi-pdfs, like the Wigner function, tomograms can be analysed using all the tools of classical probability theory for pdf estimation, which can allow a better quality of state reconstruction. This is particularly useful when dealing with non-Gaussian states where the pdfs are multi-mode. The knowledge of the family of distributions plays an important role in the application of both parametric and nonparametric density estimation methods. We show that not all pdfs can play the role of tomograms of quantum states and introduce the conditions that must be fulfilled by pdfs to be “quantum”.
]]>Entropy doi: 10.3390/e26020175
Authors: Shen Tian Bolun Tan Yuchen Lin Tieying Wang Kaiyong Hu
Latent thermal energy storage (LTES) devices can efficiently store renewable energy in thermal form and guarantee a stable-temperature thermal energy supply. The gravity-driven motion melting (GDMM) process improves the overall melting rate for packaged phase-change material (PCM) by constructing an enhanced flow field in the liquid phase. However, due to the complex mechanisms involved in fluid–solid coupling and liquid–solid phase transition, numerical simulation studies that demonstrate physical details are necessary. In this study, a simplified numerical model based on the Eulerian method is proposed. We aimed to introduce a fluid deformation yield stress equation to the “solid phase” based on the Bingham fluid assumption. As a result, fluid–solid coupling and liquid–solid phase transition processes become continuously solvable. The proposed model is validated by the referenced experimental measurements. The enhanced performance of liquid-phase convection and the macroscopic settling of the “solid phase” are numerically analyzed. The results indicate that the enhanced liquid-phase fluidity allows for a stronger heat transfer process than natural convection for the pure liquid phase. The gravity-driven pressure difference is directly proportional to the vertical melting rate, which indicates the feasibility of controlling the pressure difference to improve the melting rate.
]]>Entropy doi: 10.3390/e26020174
Authors: Arieh Ben-Naim
In this article, we start by describing a few “definitions” of the solvation processes, which were used in the literature until about 1980. Then, we choose one of these definitions and show that it has a simple molecular interpretation. This fact led to a new definition of the solvation process and the corresponding thermodynamic quantities. The new measure of the solvation Gibbs energy has a simple interpretation. In addition, the thermodynamic quantities associated with the new solvation process have several other advantages over the older measures. These will be discussed briefly in the third section. In the fourth section, we discuss a few applications of the new solvation process.
]]>Entropy doi: 10.3390/e26020173
Authors: Chengrui Li Wenwen Zhao Hualin Liu Youtao Xue Yuxin Yang Weifang Chen
The issue of hypersonic boundary layer transition prediction is a critical aerodynamic concern that must be addressed during the aerodynamic design process of high-speed vehicles. In this context, we propose an advanced mesoscopic method that couples the gas kinetic scheme (GKS) with the Langtry–Menter transition model, including its three high-speed modification methods, tailored for accurate predictions of high-speed transition flows. The new method incorporates the turbulent kinetic energy term into the Maxwellian velocity distribution function, and it couples the effects of high-speed modifications on turbulent kinetic energy within the computational framework of the GKS solver. This integration elevates both the transition model and its high-speed enhancements to the mesoscopic level, enhancing the method’s predictive capability. The GKS-coupled mesoscopic method is validated through a series of test cases, including supersonic flat plate simulation, multiple hypersonic cone cases, the Hypersonic International Flight Research Experimentation (HIFiRE)-1 flight test, and the HIFiRE-5 case. The computational results obtained from these cases exhibit favorable agreement with experimental data. In comparison with the conventional Godunov method, the new approach encompasses a broader range of physical mechanisms, yielding computational results that closely align with the true physical phenomena and marking a notable elevation in computational fidelity and accuracy. This innovative method potentially satisfies the compelling demand for developing a precise and rapid method for predicting hypersonic boundary layer transition, which can be readily used in engineering applications.
]]>Entropy doi: 10.3390/e26020172
Authors: Yue Wang Jun-Jie Huang
Compound droplets have received increasing attention due to their applications in several areas, including medicine and materials. Previous works mostly focused on compound droplets on planar surfaces and, as such, the effects of curved walls have not been studied thoroughly. In this paper, the influence of the properties of curved solid wall (including the shape, curvature, and contact angle) on the wetting behavior of compound droplets is explored. The axisymmetric lattice Boltzmann method, based on the conservative phase field formulation for ternary fluids, was used to numerically study the wetting and spreading of a compound droplet of the Janus type on various curved solid walls at large density ratios, focusing on whether the separation of compound droplets occurs. Several types of wall geometries were considered, including a planar wall, a concave wall with constant curvature, and a convex wall with fixed or variable curvature (specifically, a prolate or oblate spheroid). The effects of surface wettability, interfacial angles, and the density ratio (of droplet to ambient fluid) on the wetting process were also explored. In general, it was found that, under otherwise identical conditions, droplet separation tends to happen more likely on more hydrophilic walls, under larger interfacial angles (measured inside the droplet), and at larger density ratios. On convex walls, a larger radius of curvature of the surface near the droplet was found to be helpful to split the Janus droplet. On concave walls, as the radius of curvature increases from a small value, the possibility to observe droplet separation first increases and then decreases. Several phase diagrams on whether droplet separation occurs during the spreading process were produced for different kinds of walls to illustrate the influences of various factors.
]]>Entropy doi: 10.3390/e26020171
Authors: Lamberto Rondoni Vincenzo Di Florio
We review, under a modern light, the conditions that render the Boltzmann equation applicable. These are conditions that permit probability to behave like mass, thereby possessing clear and concrete content, whereas generally, this is not the case. Because science and technology are increasingly interested in small systems that violate the conditions of the Boltzmann equation, probability appears to be the only mathematical tool suitable for treating them. Therefore, Boltzmann’s teachings remain relevant, and the present analysis provides a critical perspective useful for accurately interpreting the results of current applications of statistical mechanics.
]]>Entropy doi: 10.3390/e26020170
Authors: David H. Wolpert Jens Kipper
The epistemic arrow of time is the fact that our knowledge of the past seems to be both of a different kind and more detailed than our knowledge of the future. Just like with the other arrows of time, it has often been speculated that the epistemic arrow arises due to the second law of thermodynamics. In this paper, we investigate the epistemic arrow of time using a fully formal framework. We begin by defining a memory system as any physical system whose present state can provide information about the state of the external world at some time other than the present. We then identify two types of memory systems in our universe, along with an important special case of the first type, which we distinguish as a third type of memory system. We show that two of these types of memory systems are time-symmetric, able to provide knowledge about both the past and the future. However, the third type of memory systems exploits the second law of thermodynamics, at least in all of its instances in our universe that we are aware of. The result is that in our universe, this type of memory system only ever provides information about the past. We also argue that human memory is of this third type, completing the argument. We end by scrutinizing the basis of the second law itself. This uncovers a previously unappreciated formal problem for common arguments that try to derive the second law from the “Past Hypothesis”, i.e., from the claim that the very early universe was in a state of extremely low entropy. Our analysis is indebted to prior work by one of us but expands and improves upon this work in several respects.
]]>