Entropy, Volume 27, Issue 4 (April 2025) – 127 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 3845 KiB  
Article
Mutual Information Neural-Estimation-Driven Constellation Shaping Design and Performance Analysis
by Xiuli Ji, Qian Wang, Liping Qian and Pooi-Yuen Kam
Entropy 2025, 27(4), 451; https://doi.org/10.3390/e27040451 - 21 Apr 2025
Abstract
The choice of constellation largely affects the performance of both wireless and optical communications. To address increasing capacity requirements, constellation shaping, especially for high-order modulations, is imperative in high-speed coherent communication systems. This paper therefore proposes novel mutual information neural estimation (MINE)-based geometric, probabilistic, and joint constellation shaping schemes, i.e., MINE-GCS, MINE-PCS, and MINE-JCS, to maximize mutual information (MI) via emerging deep learning (DL) techniques. We first introduce the MINE module to effectively estimate and maximize MI through backpropagation, without explicit knowledge of the channel state information. We then train encoder and probability-generator networks at different signal-to-noise ratios to optimize the locations and probabilities of the constellation points, respectively. Note that MINE transforms the exact MI calculation into a parameter optimization problem. Our MINE-based schemes optimize only the transmitter end and avoid the computational and structural complexity of traditional shaping. Simulations verified the superior MI performance of all the designs; among them, MINE-JCS performed best over the additive white Gaussian noise channel, compared with unshaped QAMs and even end-to-end training and other DL-based joint shaping schemes. For example, the low-order 8-ary MINE-GCS achieved an MI gain of about 0.1 bits/symbol over unshaped Star-8QAM. Notably, the proposed schemes strike a balance between implementation complexity and MI performance, and they are expected to be applied in practical scenarios with various noise and fading levels. Full article
(This article belongs to the Special Issue Advances in Modern Channel Coding)
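MINE replaces exact MI computation with maximization of the Donsker–Varadhan (DV) lower bound over a parametric critic. The paper trains a neural critic; as a minimal sketch of the bound itself (not the authors' network), the example below uses a toy bilinear critic T(x, y) = a·x·y, an assumption chosen only so the "training" reduces to a one-parameter grid search, applied to correlated Gaussians whose true MI is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.8
# Correlated Gaussian pair; true MI = -0.5 * log(1 - rho^2) ≈ 0.511 nats
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def dv_bound(t_joint, t_marginal):
    """Donsker-Varadhan lower bound on MI: E_p[T] - log E_q[exp(T)]."""
    return t_joint.mean() - np.log(np.mean(np.exp(t_marginal)))

y_shuf = rng.permutation(y)  # shuffling approximates the product of marginals
# "Train" the critic by grid search over its single parameter a
best = max(dv_bound(a * x * y, a * x * y_shuf) for a in np.linspace(0.1, 0.9, 33))
true_mi = -0.5 * np.log(1 - rho**2)
print(f"DV estimate: {best:.3f} nats (true MI: {true_mi:.3f} nats)")
```

Because the bilinear critic is weaker than a neural one, the estimate stays below the true MI; a trained network critic would tighten the bound.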
16 pages, 3473 KiB  
Article
Information Theory Quantifiers in Cryptocurrency Time Series Analysis
by Micaela Suriano, Leonidas Facundo Caram, Cesar Caiafa, Hernán Daniel Merlino and Osvaldo Anibal Rosso
Entropy 2025, 27(4), 450; https://doi.org/10.3390/e27040450 - 21 Apr 2025
Abstract
This paper investigates the temporal evolution of cryptocurrency time series using information measures such as complexity, entropy, and Fisher information. The main objective is to differentiate between various levels of randomness and chaos. The methodology was applied to 176 daily closing-price time series of different cryptocurrencies from October 2015 to October 2024, each with more than 30 days of data and not entirely null. Complexity–entropy causality plane (CECP) analysis reveals that daily cryptocurrency series with lengths of two years or less exhibit chaotic behavior, while those longer than two years display stochastic behavior. Most longer series resemble colored noise, with the parameter k varying between 0 and 2. Additionally, Natural Language Processing (NLP) analysis identified the most relevant terms in each white paper, enabling a clustering method that resulted in four distinct clusters. However, no significant characteristics were found across these clusters in terms of the dynamics of the time series. This finding challenges the assumption that project narratives dictate market behavior. For this reason, investment recommendations should prioritize real-time informational metrics over white paper content. Full article
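The entropy axis of the CECP is built on the Bandt–Pompe ordinal-pattern distribution. A minimal sketch of normalized permutation entropy, one of the quantifiers this kind of analysis uses (the embedding dimension d = 4 is an illustrative choice, not necessarily the paper's setting):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, d=4, tau=1):
    """Normalized Bandt-Pompe permutation entropy of embedding dimension d."""
    n = len(x) - (d - 1) * tau
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i : i + d * tau : tau]))  # ordinal pattern
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values())) / n
    return float(-(p * np.log(p)).sum() / np.log(factorial(d)))

rng = np.random.default_rng(1)
noise = rng.standard_normal(5000)                # stochastic: PE near 1
sine = np.sin(np.linspace(0, 20 * np.pi, 5000))  # deterministic: PE low
print(permutation_entropy(noise), permutation_entropy(sine))
```

White noise fills the ordinal-pattern histogram almost uniformly, while a smooth deterministic signal concentrates on a few monotone patterns, which is what lets the plane separate randomness from chaos.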
17 pages, 885 KiB  
Review
Maximum Entropy Production Principle of Thermodynamics for the Birth and Evolution of Life
by Yasuji Sawada, Yasukazu Daigaku and Kenji Toma
Entropy 2025, 27(4), 449; https://doi.org/10.3390/e27040449 - 21 Apr 2025
Abstract
Research on the birth and evolution of life is reviewed with reference to the maximum entropy production principle (MEPP). It has been shown that this principle is essential for a consistent understanding of the birth and evolution of life. First, recent work on the birth of a self-replicative system as pre-RNA life is reviewed in relation to the MEPP. A dynamical-systems approach identifies a critical polymer concentration in a local system above which an exponential increase in entropy production is guaranteed. Second, studies of the early stages of evolution are reviewed: experimental research on the number of cells necessary to form a multi-cellular organization, and numerical research on differentiation in a model system and its relation to the MEPP. This review suggests that the late stage of evolution is characterized by the formation of societies and by external entropy production. A hypothesis on the general route of evolution from its birth to present-day life, following the MEPP, is discussed. Some examples of life forms that faced poor thermodynamic conditions are presented with thermodynamic discussion. Throughout this review, the MEPP proves consistently useful for a thermodynamic understanding of the birth and evolution of life under thermodynamic conditions far from equilibrium. Full article
7 pages, 173 KiB  
Editorial
A Snapshot of Bayesianism
by Mark A. Gannon
Entropy 2025, 27(4), 448; https://doi.org/10.3390/e27040448 - 21 Apr 2025
Abstract
Students are told in basic probability classes that there are two main “schools” of statistics, the frequentist and the Bayesian, and that those different views of how to approach statistical inference problems arise from two different views of the meaning of probability [...] Full article
(This article belongs to the Special Issue Bayesianism)
15 pages, 3352 KiB  
Article
Analysis of High-Dimensional Coordination in Human Movement Using Variance Spectrum Scaling and Intrinsic Dimensionality
by Dobromir Dotov, Jingxian Gu, Philip Hotor and Joanna Spyra
Entropy 2025, 27(4), 447; https://doi.org/10.3390/e27040447 - 21 Apr 2025
Abstract
Full-body movement involving multi-segmental coordination has been essential to our evolution as a species, but its study has been focused mostly on the analysis of one-dimensional data. The field is poised for a change by the availability of high-density recording and data sharing. New ideas are needed to revive classical theoretical questions such as the organization of the highly redundant biomechanical degrees of freedom and the optimal distribution of variability for efficiency and adaptiveness. In movement science, there are popular methods that up-dimensionalize: they start with one or a few recorded dimensions and make inferences about the properties of a higher-dimensional system. The opposite problem, dimensionality reduction, arises when making inferences about the properties of a low-dimensional manifold embedded inside a large number of kinematic degrees of freedom. We present an approach to quantify the smoothness and degree to which the kinematic manifold of full-body movement is distributed among embedding dimensions. The principal components of embedding dimensions are rank-ordered by variance. The power law scaling exponent of this variance spectrum is a function of the smoothness and dimensionality of the embedded manifold. It defines a threshold value below which the manifold becomes non-differentiable. We verified this approach by showing that the Kuramoto model obeys the threshold when approaching global synchronization. Next, we tested whether the scaling exponent was sensitive to participants’ gait impairment in a full-body motion capture dataset containing short gait trials. Variance scaling was highest in healthy individuals, followed by osteoarthritis patients after hip replacement, and lastly, the same patients before surgery. Interestingly, in the same order of groups, the intrinsic dimensionality increased but the fractal dimension decreased, suggesting a more compact but complex manifold in the healthy group. Thinking about manifold dimensionality and smoothness could inform classic problems in movement science and the exploration of the biomechanics of full-body action. Full article
(This article belongs to the Section Entropy and Biology)
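The variance-spectrum idea above can be sketched in a few lines: rank-order the principal-component variances and read off the power-law exponent from a log-log fit. The synthetic data below (an assumption for illustration, not the paper's motion-capture dataset) is built with a known exponent so the recovery can be checked:

```python
import numpy as np

def variance_scaling_exponent(x):
    """Slope of the rank-ordered PCA variance spectrum in log-log coordinates."""
    xc = x - x.mean(axis=0)
    var = np.sort(np.linalg.eigvalsh(np.cov(xc.T)))[::-1]  # descending variances
    k = np.arange(1, len(var) + 1)
    slope, _ = np.polyfit(np.log(k), np.log(var), 1)
    return -slope

rng = np.random.default_rng(2)
n, d, alpha = 2000, 30, 1.5
# Synthetic "markers" whose principal variances decay as lambda_k ∝ k^(-alpha)
latent = rng.standard_normal((n, d)) * np.arange(1, d + 1) ** (-alpha / 2)
q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal embedding
print(f"recovered exponent ≈ {variance_scaling_exponent(latent @ q):.2f}")
```

The orthogonal embedding leaves the spectrum unchanged, mimicking how the manifold's variance distribution survives a change of kinematic coordinates.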
22 pages, 4976 KiB  
Article
MambaOSR: Leveraging Spatial-Frequency Mamba for Distortion-Guided Omnidirectional Image Super-Resolution
by Weilei Wen, Qianqian Zhao and Xiuli Shao
Entropy 2025, 27(4), 446; https://doi.org/10.3390/e27040446 - 20 Apr 2025
Abstract
Omnidirectional image super-resolution (ODISR) is critical for VR/AR applications, as high-quality 360° visual content significantly enhances immersive experiences. However, existing ODISR methods suffer from limited receptive fields and high computational complexity, which restricts their ability to model long-range dependencies and extract global structural features. Consequently, these limitations hinder the effective reconstruction of high-frequency details. To address these issues, we propose a novel Mamba-based ODISR network, termed MambaOSR, which consists of three key modules working collaboratively for accurate reconstruction. Specifically, we first introduce a spatial-frequency visual state space model (SF-VSSM) to capture global contextual information via dual-domain representation learning, thereby enhancing the preservation of high-frequency details. Subsequently, we design a distortion-guided module (DGM) that leverages distortion map priors to adaptively model geometric distortions, effectively suppressing artifacts resulting from equirectangular projections. Finally, we develop a multi-scale feature fusion module (MFFM) that integrates complementary features across multiple scales, further improving reconstruction quality. Extensive experiments conducted on the SUN360 dataset demonstrate that our proposed MambaOSR achieves a 0.16 dB improvement in WS-PSNR and increases the mutual information by 1.99% compared with state-of-the-art methods, significantly enhancing both visual quality and the information richness of omnidirectional images. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
49 pages, 1215 KiB  
Review
A Survey on Semantic Communications in Internet of Vehicles
by Sha Ye, Qiong Wu, Pingyi Fan and Qiang Fan
Entropy 2025, 27(4), 445; https://doi.org/10.3390/e27040445 - 20 Apr 2025
Abstract
The Internet of Vehicles (IoV), as the core of intelligent transportation systems, enables comprehensive interconnection between vehicles and their surroundings through multiple communication modes, which is significant for autonomous driving and intelligent traffic management. However, with the emergence of new applications, traditional communication technologies face the problems of scarce spectrum resources and high latency. Semantic communication, which focuses on extracting, transmitting, and recovering useful semantic information from messages, can reduce redundant data transmission, improve spectrum utilization, and provide innovative solutions to communication challenges in the IoV. This paper systematically reviews state-of-the-art semantic communications in the IoV, elaborates the technical background of the IoV and semantic communications, and discusses in depth the key technologies of semantic communications in the IoV, including semantic information extraction, semantic communication architecture, and resource allocation and management. Through specific case studies, it demonstrates that semantic communications can be effectively employed in traffic environment perception and understanding, intelligent driving decision support, IoV service optimization, and intelligent traffic management. Additionally, it analyzes the current challenges and future research directions. This survey reveals that semantic communications have broad application prospects in the IoV, but existing problems must be solved by combining advanced technologies to promote their wide adoption and to contribute to the development of intelligent transportation systems. Full article
(This article belongs to the Special Issue Semantic Information Theory)
17 pages, 4412 KiB  
Article
Improving the Protection of Step-Down Transformers by Utilizing Percentage Differential Protection and Scale-Dependent Intrinsic Entropy
by Chia-Wei Huang, Chih-Chiang Fang, Wei-Tai Hsu, Chih-Chung Yang and Li-Ting Zhou
Entropy 2025, 27(4), 444; https://doi.org/10.3390/e27040444 - 20 Apr 2025
Abstract
Transformer operations are susceptible to both internal and external faults. This study primarily employed software to construct a power system simulation model featuring a step-down transformer. The simulation model comprised three single-phase transformers with ten tap positions at the secondary coil to analyze internal faults. Additionally, ten fault positions between the power transformer and the load were considered for external fault analysis. The protection scheme incorporated percentage differential protection for both the power transformer and the transmission line, aiming to explore fault characteristics. To mitigate the protection device’s sensitivity issues, the scale-dependent intrinsic entropy method was utilized as a decision support system to minimize power system protection misoperations. The results indicated the effectiveness and practicality of the auxiliary method through comprehensive failure analysis. Full article
(This article belongs to the Special Issue Entropy-Based Fault Diagnosis: From Theory to Applications)
16 pages, 3277 KiB  
Article
A Multi-Index Fusion Adaptive Cavitation Feature Extraction for Hydraulic Turbine Cavitation Detection
by Yi Wang, Feng Li, Mengge Lv, Tianzhen Wang and Xiaohang Wang
Entropy 2025, 27(4), 443; https://doi.org/10.3390/e27040443 - 19 Apr 2025
Abstract
Under cavitation conditions, hydraulic turbines can suffer from mechanical damage, which will shorten their useful life and reduce power generation efficiency. Timely detection of cavitation phenomena in hydraulic turbines is critical for ensuring operational reliability and maintaining energy conversion efficiency. However, extracting cavitation features is challenging due to strong environmental noise interference and the inherent non-linearity and non-stationarity of a cavitation hydroacoustic signal. A multi-index fusion adaptive cavitation feature extraction and cavitation detection method is proposed to solve the above problems. The number of decomposition layers in the multi-index fusion variational mode decomposition (VMD) algorithm is adaptively determined by fusing multiple indicators related to cavitation characteristics, thus retaining more cavitation information and improving the quality of cavitation feature extraction. Then, the cavitation features are selected based on the frequency characteristics of different degrees of cavitation. In this way, the detection of incipient cavitation and the secondary detection of supercavitation are realized. Finally, the cavitation detection effect was verified using the hydro-acoustic signal collected from a mixed-flow hydro turbine model test stand. The detection accuracy rate and false alarm rate were used as evaluation indicators, and the comparison results showed that the proposed method has high detection accuracy and a low false alarm rate. Full article
(This article belongs to the Section Multidisciplinary Applications)
25 pages, 8140 KiB  
Article
RDCRNet: RGB-T Object Detection Network Based on Cross-Modal Representation Model
by Yubin Li, Weida Zhan, Yichun Jiang and Jinxin Guo
Entropy 2025, 27(4), 442; https://doi.org/10.3390/e27040442 - 19 Apr 2025
Abstract
RGB-thermal object detection harnesses complementary information from visible and thermal modalities to enhance detection robustness in challenging environments, particularly under low-light conditions. However, existing approaches suffer from limitations due to their heavy dependence on precisely registered data and insufficient handling of cross-modal distribution disparities. This paper presents RDCRNet, a novel framework incorporating a Cross-Modal Representation Model to effectively address these challenges. The proposed network features a Cross-Modal Feature Remapping Module that aligns modality distributions through statistical normalization and learnable correction parameters, significantly reducing feature discrepancies between modalities. A Cross-Modal Refinement and Interaction Module enables sophisticated bidirectional information exchange via trinity refinement for intra-modal context modeling and cross-attention mechanisms for unaligned feature fusion. Multiscale detection capability is enhanced through a Cross-Scale Feature Integration Module, improving detection performance across various object sizes. To overcome the inherent data scarcity in RGB-T detection, we introduce a self-supervised pretraining strategy that combines masked reconstruction with adversarial learning and semantic consistency loss, effectively leveraging both aligned and unaligned RGB-T samples. Extensive experiments demonstrate that RDCRNet achieves state-of-the-art performance on multiple benchmark datasets while maintaining high computational and storage efficiency, validating its superiority and practical effectiveness in real-world applications. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
13 pages, 440 KiB  
Article
A Constrained Talagrand Transportation Inequality with Applications to Rate-Distortion-Perception Theory
by Li Xie, Liangyan Li, Jun Chen, Lei Yu and Zhongshan Zhang
Entropy 2025, 27(4), 441; https://doi.org/10.3390/e27040441 - 19 Apr 2025
Abstract
A constrained version of Talagrand’s transportation inequality is established, which reveals an intrinsic connection between the Gaussian distortion-rate-perception functions with limited common randomness under the Kullback–Leibler divergence-based and squared Wasserstein-2 distance-based perception measures. This connection provides an organizational framework for assessing existing bounds on these functions. In particular, we show that the best-known bounds of Xie et al. are nonredundant when examined through this connection. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory, the Third Edition)
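For orientation, the classical, unconstrained form of Talagrand's transportation inequality, which the constrained version in the paper refines, bounds the squared Wasserstein-2 distance to the standard Gaussian measure $\gamma$ on $\mathbb{R}^n$ by the relative entropy:

```latex
W_2(\mu, \gamma)^2 \;\le\; 2\, D(\mu \,\|\, \gamma)
```

The display above is only the standard inequality; the paper's contribution is a constrained variant adapted to the limited-common-randomness distortion-rate-perception setting.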
21 pages, 772 KiB  
Article
The Intrinsic Dimension of Neural Network Ensembles
by Francesco Tosti Guerra, Andrea Napoletano and Andrea Zaccaria
Entropy 2025, 27(4), 440; https://doi.org/10.3390/e27040440 - 18 Apr 2025
Abstract
In this work, we propose to study the collective behavior of different ensembles of neural networks. These sets define and live on complex manifolds that evolve through training. Each manifold is characterized by its intrinsic dimension, a measure of the variability of the ensemble and, as such, a measure of the impact of the different training strategies. Indeed, higher intrinsic dimension values imply higher variability among the networks and a larger parameter space coverage. Here, we quantify how much the training choices allow the exploration of the parameter space, finding that a random initialization of the parameters is a stronger source of variability than, progressively, data distortion, dropout, and batch shuffle. We then investigate the combinations of these strategies, the parameters involved, and the impact on the accuracy of the predictions, shedding light on the often-underestimated consequences of these training choices. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
20 pages, 896 KiB  
Article
MAB-Based Online Client Scheduling for Decentralized Federated Learning in the IoT
by Zhenning Chen, Xinyu Zhang, Siyang Wang and Youren Wang
Entropy 2025, 27(4), 439; https://doi.org/10.3390/e27040439 - 18 Apr 2025
Abstract
Different from conventional federated learning (FL), which relies on a central server for model aggregation, decentralized FL (DFL) exchanges models among edge servers, thus improving robustness and scalability. When deploying DFL in the Internet of Things (IoT), limited wireless resources cannot provide simultaneous access to massive numbers of devices, so client scheduling must be performed to balance the convergence rate and model accuracy. However, the heterogeneity of computing and communication resources across client devices, combined with the time-varying nature of wireless channels, makes it challenging to accurately estimate the delay associated with client participation during scheduling. To address this issue, we investigate the client scheduling and resource optimization problem in DFL without prior client information. Specifically, the problem is reformulated as a multi-armed bandit (MAB) program, and an online learning algorithm that uses contextual multi-armed bandits for client delay estimation and scheduling is proposed. Theoretical analysis shows that the algorithm achieves asymptotically optimal performance. The experimental results show that the algorithm makes asymptotically optimal client selection decisions and outperforms existing algorithms in reducing the cumulative delay of the system. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
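The bandit formulation above can be sketched with the plain UCB1 idea: treat each client as an arm with an unknown mean delay and schedule the arm whose optimistic delay estimate is lowest. The delay values and noise model below are hypothetical, and the paper's algorithm is contextual rather than this context-free sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical mean per-round participation delays of four clients
true_delay = np.array([0.2, 0.5, 0.8, 0.35])

def ucb_schedule(rounds=5000, c=1.0):
    """UCB1-style scheduler: each round, pick the client whose optimistic
    (lower-confidence) delay estimate is smallest."""
    k = len(true_delay)
    counts, sums = np.zeros(k), np.zeros(k)
    for t in range(rounds):
        if t < k:                                   # play each client once first
            arm = t
        else:
            mean = sums / counts
            bonus = c * np.sqrt(np.log(t) / counts)
            arm = int(np.argmin(mean - bonus))      # optimism under uncertainty
        delay = true_delay[arm] + 0.1 * rng.standard_normal()  # noisy observation
        counts[arm] += 1
        sums[arm] += delay
    return counts

counts = ucb_schedule()
print(counts)  # the lowest-delay client (index 0) should dominate
```

The exploration bonus shrinks as a client accumulates observations, so scheduling concentrates on the genuinely fastest client while still probing the others at a logarithmic rate, which is the mechanism behind the asymptotic-optimality guarantee.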
22 pages, 3428 KiB  
Article
Robust Smoothing Cardinalized Probability Hypothesis Density Filter-Based Underwater Multi-Target Direction-of-Arrival Tracking with Uncertain Measurement Noise
by Xinyu Gu, Xianghao Hou, Boxuan Zhang, Yixin Yang and Shuanping Du
Entropy 2025, 27(4), 438; https://doi.org/10.3390/e27040438 - 18 Apr 2025
Abstract
In view of typical multi-target scenarios of underwater direction-of-arrival (DOA) tracking complicated by uncertain measurement noise in unknown underwater environments, a robust underwater multi-target DOA tracking method is proposed by incorporating Sage–Husa (SH) noise estimation and a backward smoothing technique within the framework of the cardinalized probability hypothesis density (CPHD) filter. First, the kinematic model of underwater targets and the measurement model based on the received signals of a hydrophone array are established, from which the CPHD-based multi-target DOA tracking algorithm is derived. To mitigate the adverse impact of uncertain measurement noise, the Sage–Husa approach is deployed for dynamic noise estimation, thereby reducing noise-induced performance degradation. Subsequently, a backward smoothing technique is applied to the forward filtering results to further enhance tracking robustness and precision. Finally, extensive simulations and experimental evaluations demonstrate that the proposed method outperforms existing DOA estimation and tracking techniques in terms of robustness and accuracy under uncertain measurement noise conditions. Full article
(This article belongs to the Special Issue Space-Air-Ground-Sea Integrated Communication Networks)
4 pages, 177 KiB  
Editorial
Landauer’s Principle: Past, Present and Future
by Edward Bormashenko
Entropy 2025, 27(4), 437; https://doi.org/10.3390/e27040437 - 18 Apr 2025
Abstract
“Thermodynamics is the only physical theory of universal content, which I am convinced will never be overthrown, within the framework of applicability of its basic concepts [...] Full article
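Landauer's principle puts a concrete number on the thermodynamics of computation: erasing one bit of information dissipates at least k_B·T·ln 2 of heat. A one-line check at room temperature:

```python
from math import log

K_B = 1.380649e-23          # Boltzmann constant, J/K (exact SI value)
T = 300.0                   # room temperature, K
e_bit = K_B * T * log(2)    # Landauer limit: minimum heat per erased bit
print(f"Landauer bound at 300 K: {e_bit:.3e} J per bit")
```

At about 3 × 10⁻²¹ J per bit, the bound sits far below the switching energies of present-day electronics, which is why it remains a limit of principle rather than of engineering practice.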
14 pages, 4928 KiB  
Article
Retina-Inspired Models Enhance Visual Saliency Prediction
by Gang Shen, Wenjun Ma, Wen Zhai, Xuefei Lv, Guangyao Chen and Yonghong Tian
Entropy 2025, 27(4), 436; https://doi.org/10.3390/e27040436 - 18 Apr 2025
Abstract
Biologically inspired retinal preprocessing improves visual perception by efficiently encoding and reducing entropy in images. In this study, we introduce a new saliency prediction framework that combines a retinal model with deep neural networks (DNNs) using information theory ideas. By mimicking the human retina, our method creates clearer saliency maps with lower entropy and supports efficient computation with DNNs by optimizing information flow and reducing redundancy. We treat saliency prediction as an information maximization problem, where important regions have high information and low local entropy. Tests on several benchmark datasets show that adding the retinal model boosts the performance of various bottom-up saliency prediction methods by better managing information and reducing uncertainty. We use metrics like mutual information and entropy to measure improvements in accuracy and efficiency. Our framework outperforms state-of-the-art models, producing saliency maps that closely match where people actually look. By combining neurobiological insights with information theory—using measures like Kullback–Leibler divergence and information gain—our method not only improves prediction accuracy but also offers a clear, quantitative understanding of saliency. This approach shows promise for future research that brings together neuroscience, entropy, and deep learning to enhance visual saliency prediction. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
19 pages, 862 KiB  
Article
Empirical Study on Fluctuation Theorem for Volatility Cascade Processes in Stock Markets
by Jun-ichi Maskawa
Entropy 2025, 27(4), 435; https://doi.org/10.3390/e27040435 - 17 Apr 2025
Abstract
This study investigates the properties of financial markets that arise from the multi-scale structure of volatility, particularly intermittency, by employing robust theoretical tools from nonequilibrium thermodynamics. Intermittency in velocity fields along spatial and temporal axes is a well-known phenomenon in developed turbulence, with extensive research dedicated to its structures and underlying mechanisms. In turbulence, such intermittency is explained through energy cascades, where energy injected at macroscopic scales is transferred to microscopic scales. Similarly, analogous cascade processes have been proposed to explain the intermittency observed in financial time series. In this work, we model volatility cascade processes in the stock market by applying the framework of stochastic thermodynamics to a Langevin system that describes the dynamics. We introduce thermodynamic concepts such as temperature, heat, work, and entropy into the analysis of financial markets. This framework allows for a detailed investigation of individual trajectories of volatility cascades across longer to shorter time scales. Further, we conduct an empirical study primarily using the normalized average of intraday logarithmic stock prices of the constituent stocks in the FTSE 100 Index listed on the London Stock Exchange (LSE), along with two additional data sets from the Tokyo Stock Exchange (TSE). Our Langevin-based model successfully reproduces the empirical distribution of volatility—defined as the absolute value of the wavelet coefficients across time scales—and the cascade trajectories satisfy the Integral Fluctuation Theorem associated with entropy production. A detailed analysis of the cascade trajectories reveals that, for the LSE data set, volatility cascades from larger to smaller time scales occur in a causal manner along the temporal axis, consistent with known stylized facts of financial time series. In contrast, for the two data sets from the TSE, while similar behavior is observed at smaller time scales, anti-causal behavior emerges at longer time scales. Full article
(This article belongs to the Special Issue Entropy-Based Applications in Sociophysics II)
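The abstract defines volatility as the absolute value of wavelet coefficients across time scales. A minimal sketch of that definition using a Haar wavelet transform (an assumed choice for illustration; the paper's wavelet and normalization may differ):

```python
import numpy as np

def haar_volatility(logprice):
    """Absolute Haar wavelet detail coefficients per scale as a volatility proxy."""
    x = np.asarray(logprice, dtype=float)
    scales = []
    while len(x) >= 2:
        n = len(x) - len(x) % 2                    # drop a trailing odd sample
        detail = (x[1:n:2] - x[0:n:2]) / np.sqrt(2.0)
        scales.append(np.abs(detail))              # "volatility" at this scale
        x = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)   # approximation for the next, coarser scale
    return scales
```

Each pass halves the series, so a length-16 input yields volatilities at four scales.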
13 pages, 1239 KiB  
Article
Information Theory in Perception of Form: From Gestalt to Algorithmic Complexity
by Daniel Algom and Daniel Fitousi
Entropy 2025, 27(4), 434; https://doi.org/10.3390/e27040434 - 17 Apr 2025
Abstract
In 1948, Claude Shannon published a revolutionary paper on communication and information in engineering, one that made its way into the psychology of perception and changed it for good. However, the path to truly successful applications to psychology has been slow and bumpy. In this article, we present a readable account of that path, explaining the early difficulties as well as the creative solutions offered. The latter include Garner's theory of sets and redundancy as well as mathematical group theory. These solutions, in turn, enabled rigorous objective definitions of the hitherto subjective Gestalt concepts of figural goodness, order, randomness, and predictability. More recent developments enabled an exact mathematical definition of the key notion of complexity. In this article, we demonstrate, for the first time, an association between people's subjective impression of figural goodness and the pattern's objective complexity. The more attractive the pattern appears to perception, the less complex it is and the smaller the set of subjectively similar patterns. Full article
(This article belongs to the Special Issue Information-Theoretic Principles in Cognitive Systems)
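One common stand-in for the algorithmic complexity the abstract invokes is compressed length, since Kolmogorov complexity is uncomputable; this is only an illustrative proxy, not the authors' measure.

```python
import zlib

def complexity_proxy(pattern: str) -> int:
    """Compressed length in bytes: a crude upper-bound proxy for algorithmic complexity."""
    return len(zlib.compress(pattern.encode("utf-8")))
```

A highly regular ("good") pattern compresses far below an irregular one of comparable content.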
50 pages, 2918 KiB  
Article
Classical Data in Quantum Machine Learning Algorithms: Amplitude Encoding and the Relation Between Entropy and Linguistic Ambiguity
by Jurek Eisinger, Ward Gauderis, Lin de Huybrecht and Geraint A. Wiggins
Entropy 2025, 27(4), 433; https://doi.org/10.3390/e27040433 - 16 Apr 2025
Abstract
The Categorical Compositional Distributional (DisCoCat) model has proven very successful in modelling sentence meaning as the interaction of word meanings. Words are modelled as quantum states that interact as dictated by grammar. This model of language has been extended to density matrices to account for ambiguity in language. Density matrices describe probability distributions over quantum states, and in this work we relate the mixedness of density matrices to ambiguity in the sentences they represent. The von Neumann entropy and the fidelity are used as measures of this mixedness. Via the process of amplitude encoding, we introduce classical data into quantum machine learning algorithms. First, the findings suggest that in quantum natural language processing, amplitude-encoding data onto a quantum computer can be a useful tool for improving the performance of the quantum machine learning models used. Second, we investigate the effect that these encoded data have on the above relation between entropy and ambiguity. We conclude that amplitude-encoding classical data in quantum machine learning algorithms makes the relation between the entropy of a density matrix and the ambiguity of the sentence it models much more intuitively interpretable. Full article
(This article belongs to the Collection Feature Papers in Information Theory)
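The von Neumann entropy used above as a mixedness measure, S(ρ) = −Tr(ρ log₂ ρ), can be computed directly from a density matrix's eigenvalues (a generic sketch, not tied to any DisCoCat implementation):

```python
import numpy as np

def von_neumann_entropy(rho, tol=1e-12):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > tol]   # convention: 0 * log 0 = 0
    return float(-(evals * np.log2(evals)).sum())
```

A pure state has entropy 0 (unambiguous), while the maximally mixed qubit state I/2 has entropy 1.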
17 pages, 1359 KiB  
Article
Quantum Synchronization via Active–Passive Decomposition Configuration: An Open Quantum-System Study
by Nan Yang and Ting Yu
Entropy 2025, 27(4), 432; https://doi.org/10.3390/e27040432 - 16 Apr 2025
Abstract
In this paper, we study the synchronization of dissipative quantum harmonic oscillators in the framework of open quantum systems via the active–passive decomposition (APD) configuration. We show that two or more quantum systems may be synchronized when the quantum systems of interest are embedded in dissipative environments and influenced by a common classical system. Such a classical system is typically termed a controller, which (1) can drive quantum systems across different regimes (e.g., from periodic to chaotic motions) and (2) constructs the so-called active–passive decomposition configuration, such that all the quantum objects under consideration may be synchronized. The main finding of this paper is that complete synchronization, measured using the standard quantum deviation, may be achieved in both stable regimes (quantum limit cycles) and unstable regimes (quantum chaotic motions). As an example, we show numerically in an optomechanical setup that complete synchronization can be realized in quantum mechanical resonators. Full article
(This article belongs to the Section Quantum Information)
21 pages, 8334 KiB  
Article
A Study Based on b-Value and Information Entropy in the 2008 Wenchuan 8.0 Earthquake
by Shasha Liang, Ziqi Wang and Xinyue Wang
Entropy 2025, 27(4), 431; https://doi.org/10.3390/e27040431 - 16 Apr 2025
Abstract
Earthquakes, as serious natural disasters, have greatly harmed human beings. In recent years, the combination of acoustic emission technology and information entropy has shown good prospects in earthquake prediction. In this paper, we study the application of acoustic emission b-values and information entropy to earthquake prediction in China and analyze their changing characteristics and roles. The acoustic emission b-value is based on the Gutenberg–Richter law, which quantifies the relationship between magnitude and occurrence frequency; lower b-values are usually associated with higher earthquake risk. Meanwhile, information entropy is used to quantify the uncertainty of the system, reflecting the distribution of seismic events and its dynamic changes. In this study, acoustic emission data from several stations around the 2008 Wenchuan 8.0 earthquake are selected for analysis. By calculating the acoustic emission b-value and information entropy, the following is found: (1) Both the b-value and information entropy change markedly before the main earthquake: during the seismic phase, the acoustic emission b-value decreases significantly, and the information entropy also decreases noticeably. The b-values of stations AXI and DFU decrease continuously over the 40 days before the earthquake, while those of stations JYA and JMG begin to decrease significantly about 17 days before the earthquake. The information entropy changes at the JJS and YZP stations are relatively pronounced, especially at the YZP station, where seismic activity shows stronger aggregation characteristics. This phenomenon indicates that the regional underground structure is in a highly unstable state. (2) The stress evolution of the rock mass divides into three stages: in the first stage, the rock mass enters a sub-stabilized state about 40 days before the main earthquake; in the second stage, crack rupture changes from a disordered state to an ordered state about 10 days before the earthquake; and in the third stage, shortly before the earthquake, the entire subsurface structure approaches destabilization. In summary, the combined analysis of the acoustic emission b-value and information entropy provides a novel dual-parameter synergy framework for earthquake monitoring and early warning, enhancing precursor recognition through the coupling of stress evolution and system disorder dynamics. Full article
(This article belongs to the Section Multidisciplinary Applications)
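The Gutenberg–Richter b-value mentioned above is commonly estimated with Aki's maximum-likelihood formula, b = log₁₀(e) / (mean(M) − Mc), where Mc is the completeness magnitude. The sketch below assumes this standard estimator, which may differ from the processing applied to the stations' data.

```python
import numpy as np

def b_value_mle(mags, mc):
    """Aki-style maximum-likelihood b-value for magnitudes at or above completeness mc."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - mc)
```

Under the Gutenberg–Richter law, magnitudes above Mc are exponential with rate b·ln10, so the estimator recovers b from the sample mean.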
13 pages, 5722 KiB  
Article
Entropy-Assisted Quality Pattern Identification in Finance
by Rishabh Gupta, Shivam Gupta, Jaskirat Singh and Sabre Kais
Entropy 2025, 27(4), 430; https://doi.org/10.3390/e27040430 - 16 Apr 2025
Abstract
Short-term patterns in financial time series form the cornerstone of many algorithmic trading strategies, yet extracting these patterns reliably from noisy market data remains a formidable challenge. In this paper, we propose an entropy-assisted framework for identifying high-quality, non-overlapping patterns that exhibit consistent behavior over time. We ground our approach in the premise that historical patterns, when accurately clustered and pruned, can yield substantial predictive power for short-term price movements. To achieve this, we incorporate an entropy-based measure as a proxy for information gain: patterns that lead to high one-sided movements in historical data yet retain low local entropy are more “informative” in signaling future market direction. Compared to conventional clustering techniques such as K-means and Gaussian Mixture Models (GMMs), which often yield biased or unbalanced groupings, our approach emphasizes balance over a forced visual boundary, ensuring that quality patterns are not lost due to over-segmentation. By emphasizing both predictive purity (low local entropy) and historical profitability, our method achieves a balanced representation of Buy and Sell patterns, making it better suited for short-term algorithmic trading strategies. This paper offers an in-depth illustration of our entropy-assisted framework through two case studies on Gold vs. USD and GBPUSD. While these examples demonstrate the method’s potential for extracting high-quality patterns, they do not constitute an exhaustive survey of all possible asset classes. Full article
(This article belongs to the Section Multidisciplinary Applications)
17 pages, 1030 KiB  
Article
Semantic Arithmetic Coding Using Synonymous Mappings
by Zijian Liang, Kai Niu, Jin Xu and Ping Zhang
Entropy 2025, 27(4), 429; https://doi.org/10.3390/e27040429 - 15 Apr 2025
Abstract
Recent semantic communication methods explore effective ways to expand the communication paradigm and improve the performance of communication systems. Nonetheless, a common problem with these methods is that the essence of semantics is not explicitly pointed out and directly utilized. A new epistemology suggests that synonymity, which is revealed as the fundamental feature of semantics, guides the establishment of semantic information theory from a novel viewpoint. Building on this theoretical basis, this paper proposes a semantic arithmetic coding (SAC) method for semantic lossless compression using intuitive synonymity. By constructing reasonable synonymous mappings and performing arithmetic coding procedures over synonymous sets, SAC can achieve higher compression efficiency for meaning-contained source sequences at the semantic level and approximate the semantic entropy limits. Experimental results on edge texture map compression show a significant improvement in coding efficiency using SAC without semantic losses compared to traditional arithmetic coding, demonstrating its effectiveness. Full article
(This article belongs to the Special Issue Semantic Information Theory)
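The intuition that coding over synonymous sets approaches a lower, semantic entropy limit can be illustrated with plain Shannon entropy before and after merging synonyms (a toy illustration; `synmap` is a hypothetical synonym table, not the paper's mappings):

```python
from collections import Counter
from math import log2

def shannon_entropy(tokens):
    """Empirical Shannon entropy (bits/token) of a token sequence."""
    n = len(tokens)
    return -sum(c / n * log2(c / n) for c in Counter(tokens).values())

def semantic_entropy(tokens, synmap):
    # Collapse each token onto its synonymous-set representative before measuring.
    return shannon_entropy([synmap.get(t, t) for t in tokens])
```

Merging synonymous symbols can only shrink the alphabet, so the semantic entropy never exceeds the ordinary entropy, matching the compression gain the abstract reports.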
11 pages, 10823 KiB  
Article
Spread Spectrum Image Watermarking Through Latent Diffusion Model
by Hongfei Wu, Xiaodan Lin and Gewei Tan
Entropy 2025, 27(4), 428; https://doi.org/10.3390/e27040428 - 15 Apr 2025
Abstract
The rapid development of diffusion models in image generation and processing has led to significant security concerns. Diffusion models are capable of producing highly realistic images that are indistinguishable from real ones. Although deploying a watermarking system can be a countermeasure to verify the ownership or origin of images, regeneration attacks arising from diffusion models can easily remove an embedded watermark without compromising the images' perceptual quality. Previous watermarking methods that hide watermark information in the carrier image are vulnerable to these newly emergent attacks. To address these challenges, we propose a robust and traceable watermark framework based on the latent diffusion model, where the spread-spectrum watermark is coupled with the diffusion noise to ensure its security and imperceptibility. Since the diffusion model is trained to reduce the information entropy of disordered data and restore its true distribution, the transparency of the hidden watermark is guaranteed. Benefiting from the spread spectrum strategy, no decoder structure is needed for watermark extraction, greatly alleviating the training overhead. Additionally, the robustness and transparency are easily controlled by a strength factor, whose operating range is studied in this work. Experimental results demonstrate that our method is robust not only to common attacks, but also to regeneration attacks and semantic-based image editing. Full article
(This article belongs to the Section Signal and Data Analysis)
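A minimal sketch of additive spread-spectrum embedding and correlation-based extraction, the mechanism that lets the abstract's framework drop the decoder network (illustrative only; the paper couples the watermark with latent diffusion noise, which is not modeled here, and `alpha` plays the role of the strength factor):

```python
import numpy as np

def ss_embed(latent, bits, alpha=1.0, seed=7):
    """Add one pseudo-noise carrier per bit; alpha trades robustness for transparency."""
    rng = np.random.default_rng(seed)
    pn = rng.standard_normal((len(bits), latent.size))     # shared pseudo-noise carriers
    signal = alpha * ((2 * np.asarray(bits) - 1) @ pn)     # BPSK-style modulation
    return latent + signal.reshape(latent.shape), pn

def ss_extract(latent_w, pn):
    """Correlation detector: the sign of each projection recovers a bit, no decoder net."""
    return (pn @ latent_w.ravel() > 0).astype(int)
```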
24 pages, 4919 KiB  
Article
Quantum Error Mitigation in Optimized Circuits for Particle-Density Correlations in Real-Time Dynamics of the Schwinger Model
by Domenico Pomarico, Mahul Pandey, Riccardo Cioli, Federico Dell’Anna, Saverio Pascazio, Francesco V. Pepe, Paolo Facchi and Elisa Ercolessi
Entropy 2025, 27(4), 427; https://doi.org/10.3390/e27040427 - 14 Apr 2025
Abstract
Quantum computing gives direct access to the study of the real-time dynamics of quantum many-body systems. In principle, it is possible to directly calculate non-equal-time correlation functions, from which one can detect interesting phenomena, such as the presence of quantum scars or dynamical quantum phase transitions. In practice, these calculations are strongly affected by noise, due to the complexity of the required quantum circuits. As a testbed for the evaluation of the real-time evolution of observables and correlations, the dynamics of the Z_n Schwinger model on a one-dimensional lattice is considered. To control the computational cost, we adopt a quantum–classical strategy that reduces the dimensionality of the system by restricting the dynamics to the Dirac vacuum sector and optimizes the embedding into a qubit model by minimizing the number of three-qubit gates. The time evolution of particle-density operators in a non-equilibrium quench protocol is both simulated in a bare noisy condition and implemented on a physical IBM quantum device. In both cases, the convergence towards a maximally mixed state is targeted by means of different error mitigation techniques. The evaluation of the particle-density correlation shows well-performing post-processing error mitigation for properly chosen coupling regimes. Full article
(This article belongs to the Special Issue Entanglement in Quantum Spin Systems)
22 pages, 4988 KiB  
Article
Unsupervised Domain Adaptation Method Based on Relative Entropy Regularization and Measure Propagation
by Lianghao Tan, Zhuo Peng, Yongjia Song, Xiaoyi Liu, Huangqi Jiang, Shubing Liu, Weixi Wu and Zhiyuan Xiang
Entropy 2025, 27(4), 426; https://doi.org/10.3390/e27040426 - 14 Apr 2025
Abstract
This paper presents a novel unsupervised domain adaptation (UDA) framework that integrates information-theoretic principles to mitigate distributional discrepancies between source and target domains. The proposed method incorporates two key components: (1) relative entropy regularization, which leverages Kullback–Leibler (KL) divergence to align the predicted label distribution of the target domain with a reference distribution derived from the source domain, thereby reducing prediction uncertainty; and (2) measure propagation, a technique that transfers probability mass from the source domain to generate pseudo-measures—estimated probabilistic representations—for the unlabeled target domain. This dual mechanism enhances both global feature alignment and semantic consistency across domains. Extensive experiments on benchmark datasets (OfficeHome and DomainNet) demonstrate that the proposed approach consistently outperforms state-of-the-art methods, particularly in scenarios with significant domain shifts. These results confirm the robustness, scalability, and theoretical grounding of our framework, offering a new perspective on the fusion of information theory and domain adaptation. Full article
(This article belongs to the Section Multidisciplinary Applications)
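Component (1), relative entropy regularization, can be sketched as a KL penalty between the batch-mean predicted target distribution and a source-derived reference (the function names and the batch-mean aggregation are assumptions for illustration, not the paper's exact loss):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions, with eps for numerical stability."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def relative_entropy_penalty(target_probs, source_ref):
    # Align the batch-mean predicted label distribution with the source reference.
    return kl_divergence(target_probs.mean(axis=0), source_ref)
```

The penalty is zero when the aggregate target predictions already match the reference and grows as they drift apart.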
19 pages, 306 KiB  
Review
The Significance of the Entropic Measure of Time in Natural Sciences
by Leonid M. Martyushev
Entropy 2025, 27(4), 425; https://doi.org/10.3390/e27040425 - 14 Apr 2025
Abstract
The review presents arguments emphasizing the importance of using the entropic measure of time (EMT) in the study of irreversibly evolving systems. The potential of this measure for obtaining the laws of system evolution is shown. It is demonstrated that EMT provides a novel and unified perspective on the principle of maximum entropy production (MEPP), which is established in the physics of irreversible processes, as well as on the laws of growth and evolution proposed in biology. Essentially, for irreversible processes, the proposed approach allows one, in a certain sense, to identify concepts such as the duration of existence, MEPP, and natural selection. EMT has been used to generalize prior results, indicating that the intrinsic time of a system depends logarithmically on extrinsic (Newtonian) time. Full article
(This article belongs to the Section Time)
24 pages, 612 KiB  
Article
Quasi-Optimal Path Convergence-Aided Automorphism Ensemble Decoding of Reed–Muller Codes
by Kairui Tian, He Sun, Yukai Liu and Rongke Liu
Entropy 2025, 27(4), 424; https://doi.org/10.3390/e27040424 - 14 Apr 2025
Abstract
By exploiting the rich automorphisms of Reed–Muller (RM) codes, the recently developed automorphism ensemble (AE) successive cancellation (SC) decoder achieves a near-maximum-likelihood (ML) performance for short block lengths. However, the appealing performance of AE-SC decoding arises from the diversity gain that requires a list of SC decoding attempts, which results in a high decoding complexity. To address this issue, this paper proposes a novel quasi-optimal path convergence (QOPC)-aided early termination (ET) technique for AE-SC decoding. This technique detects strong convergence between the partial path metrics (PPMs) of SC constituent decoders to reliably identify the optimal decoding path at runtime. When the QOPC-based ET criterion is satisfied during the AE-SC decoding, only the identified path is allowed to proceed for a complete codeword estimate, while the remaining paths are terminated early. The numerical results demonstrated that for medium-to-high-rate RM codes in the short-length regime, the proposed QOPC-aided ET method incurred negligible performance loss when applied to fully parallel AE-SC decoding. Meanwhile, it achieved a complexity reduction that ranged from 35.9% to 47.4% at a target block error rate (BLER) of 10⁻³, where it consistently outperformed a state-of-the-art path metric threshold (PMT)-aided ET method. Additionally, under a partially parallel framework of AE-SC decoding, the proposed QOPC-aided ET method achieved a greater complexity reduction that ranged from 81.3% to 86.7% at a low BLER that approached 10⁻⁵ while maintaining a near-ML decoding performance. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory, the Third Edition)
17 pages, 766 KiB  
Article
Preventing Posterior Collapse with DVAE for Text Modeling
by Tianbao Song, Zongyi Huang, Xin Liu and Jingbo Sun
Entropy 2025, 27(4), 423; https://doi.org/10.3390/e27040423 - 14 Apr 2025
Abstract
This paper introduces a novel variational autoencoder model, termed DVAE, to prevent posterior collapse in text modeling. DVAE employs a dual-path architecture within its decoder: path A and path B. Path A feeds text instances directly into the decoder, whereas path B replaces a subset of word tokens in each text instance with a generic unknown token before input into the decoder. A stopping strategy is implemented, wherein both paths are concurrently active during the early phases of training; as the model approaches convergence, path B is removed. To further refine performance, a KL weight dropout method is employed, which randomly sets certain dimensions of the KL weight to zero during the annealing process. Through path B, DVAE compels the latent variables to encode more information about the input texts and to fully utilize the expressiveness of the decoder, while path A and the stopping strategy help avoid the local optimum that arises when path B is active. Furthermore, the KL weight dropout method increases the number of active units within the latent variables. Experimental results show the excellent performance of DVAE in density estimation, representation learning, and text generation. Full article
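The KL weight dropout step can be sketched as a random per-dimension mask applied to the annealed KL weight (an illustrative reading of the abstract, not the authors' implementation; the weight vector would multiply the per-dimension KL terms of the VAE loss):

```python
import numpy as np

def kl_weight_dropout(anneal_weight, dim, drop_prob=0.3, rng=None):
    """Per-dimension KL weights with a random subset zeroed during annealing."""
    rng = rng if rng is not None else np.random.default_rng()
    mask = (rng.random(dim) >= drop_prob).astype(float)
    return anneal_weight * mask
```

Zeroing a dimension's KL weight removes its compression pressure for that step, which is one way to keep more latent units active.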
17 pages, 5186 KiB  
Article
Efficient Integer Quantization for Compressed DETR Models
by Peng Liu, Congduan Li, Nanfeng Zhang, Jingfeng Yang and Li Wang
Entropy 2025, 27(4), 422; https://doi.org/10.3390/e27040422 - 13 Apr 2025
Abstract
The Transformer-based object detection model DETR has powerful feature extraction and recognition capabilities, but its high computational and storage requirements limit its deployment on resource-constrained devices. To address this problem, we first replace the ResNet-50 backbone network in DETR with Swin-T, which unifies the backbone network with the Transformer encoder and decoder under the same Transformer processing paradigm. On this basis, we propose a fully integer-based quantized inference scheme, which effectively serves as a data compression method that reduces memory occupation and computational complexity. Unlike previous approaches that quantize only the linear layers of DETR, we further apply integer approximation to all non-linear operational layers (e.g., Sigmoid, Softmax, LayerNorm, GELU), thus executing the entire inference process in the integer domain. Experimental results show that our method reduces computation and storage to 6.3% and 25% of the original model, respectively, while average accuracy decreases by only 1.1%, validating the method as an efficient and hardware-friendly solution for object detection. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
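A generic uint8 affine quantize/dequantize pair illustrates the kind of integer mapping such a scheme builds on (a standard textbook sketch, not the paper's DETR-specific calibration or its integer approximations of the non-linear layers):

```python
import numpy as np

def quantize(x, num_bits=8):
    """Affine quantization: x ≈ (q - zero_point) * scale, with q stored as uint8."""
    qmin, qmax = 0, 2 ** num_bits - 1
    span = float(x.max() - x.min())
    scale = span / (qmax - qmin) if span > 0 else 1.0
    zero_point = int(round(qmin - float(x.min()) / scale))
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original floats from integer codes."""
    return (q.astype(np.float32) - zero_point) * scale
```

The round-trip error of each value is bounded by about half a quantization step, which is why 8-bit storage can retain most of the model's accuracy.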