Entropy, Volume 26, Issue 1 (January 2024) – 96 articles

Cover Story: Schrödinger’s essay about a cat in a superposition of dead and alive states presents an inconsistency between quantum mechanics and macroscopic realism, which posits the cat to be dead or alive prior to observation. The paradox is clarified by examining two entangled “cats”. The “cats” are fields; the spin up and down states are macroscopically distinct coherent states, and the measurement settings are adjusted using nonlinear interactions. Quantum mechanics predicts a violation of a Bell inequality, hence falsifying “deterministic macroscopic realism”. However, by distinguishing between the cat-systems before and after the setting interactions, and analyzing the Q-function, one finds consistency with a subset of the Einstein–Podolsky–Rosen premises, meaning that “weak macroscopic realism” is preserved.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
31 pages, 38115 KiB  
Article
FedTKD: A Trustworthy Heterogeneous Federated Learning Based on Adaptive Knowledge Distillation
by Leiming Chen, Weishan Zhang, Cihao Dong, Dehai Zhao, Xingjie Zeng, Sibo Qiao, Yichang Zhu and Chee Wei Tan
Entropy 2024, 26(1), 96; https://doi.org/10.3390/e26010096 - 22 Jan 2024
Viewed by 1036
Abstract
Federated learning allows multiple parties to train models while jointly protecting user privacy. However, traditional federated learning requires each client to have the same model structure to fuse the global model. In real-world scenarios, each client may need to develop personalized models based on its environment, making it difficult to perform federated learning in a heterogeneous model environment. Some knowledge distillation methods address the problem of heterogeneous model fusion to some extent. However, these methods assume that each client is trustworthy. Some clients may produce malicious or low-quality knowledge, making it difficult to aggregate trustworthy knowledge in a heterogeneous environment. To address these challenges, we propose a trustworthy heterogeneous federated learning framework (FedTKD) to achieve client identification and trustworthy knowledge fusion. Firstly, we propose a malicious client identification method based on client logit features, which can exclude malicious information when fusing the global logit. Then, we propose a selective knowledge fusion method to achieve high-quality global logit computation. Additionally, we propose an adaptive knowledge distillation method to improve the accuracy of knowledge transfer from the server side to the client side. Finally, we design different attack and data distribution scenarios to validate our method. The experiments show that our method outperforms the baseline methods, showing stable performance in all attack scenarios and achieving an accuracy improvement of 2% to 3% in different data distributions. Full article
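To make the fusion step concrete, here is a minimal sketch (plain NumPy; the function and client names are illustrative and not the authors' exact FedTKD algorithm) of logit-based filtering followed by selective averaging: each client reports logits on a shared reference set, clients far from a robust consensus are excluded, and the remaining logits are averaged into a global soft target for distillation.

```python
import numpy as np

def fuse_trusted_logits(client_logits, tol=3.0):
    """Illustrative server-side fusion: exclude clients whose logits deviate strongly
    from an element-wise median consensus, then average the trusted clients' logits."""
    ids = list(client_logits)
    stacked = np.stack([client_logits[c] for c in ids])   # (n_clients, n_samples, n_classes)
    consensus = np.median(stacked, axis=0)                # robust provisional global logit
    dists = np.linalg.norm(stacked - consensus, axis=(1, 2))
    cutoff = tol * np.median(dists) + 1e-12
    trusted = [c for c, d in zip(ids, dists) if d <= cutoff]
    global_logit = np.mean([client_logits[c] for c in trusted], axis=0)
    return trusted, global_logit

# Toy check: two honest clients and one client sending noisy (malicious) logits.
rng = np.random.default_rng(0)
honest = rng.normal(size=(100, 10))
clients = {"A": honest + 0.05 * rng.normal(size=honest.shape),
           "B": honest + 0.05 * rng.normal(size=honest.shape),
           "C": rng.normal(scale=10.0, size=honest.shape)}
trusted, fused = fuse_trusted_logits(clients)
print("trusted clients:", trusted)   # expected: ['A', 'B']
```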

23 pages, 501 KiB  
Article
How the Post-Data Severity Converts Testing Results into Evidence for or against Pertinent Inferential Claims
by Aris Spanos
Entropy 2024, 26(1), 95; https://doi.org/10.3390/e26010095 - 22 Jan 2024
Viewed by 714
Abstract
The paper makes a case that the current discussions on replicability and the abuse of significance testing have overlooked a more general contributor to the untrustworthiness of published empirical evidence, which is the uninformed and recipe-like implementation of statistical modeling and inference. It is argued that this contributes to the untrustworthiness problem in several different ways, including [a] statistical misspecification, [b] unwarranted evidential interpretations of frequentist inference results, and [c] questionable modeling strategies that rely on curve-fitting. What is more, the alternative proposals to replace or modify frequentist testing, including [i] replacing p-values with observed confidence intervals and effect sizes, and [ii] redefining statistical significance, will not address the untrustworthiness of evidence problem since they are equally vulnerable to [a]–[c]. The paper calls for distinguishing unduly data-dependent ‘statistical results’, such as a point estimate, a p-value, and accept/reject H0, from ‘evidence for or against inferential claims’. The post-data severity (SEV) evaluation of the accept/reject H0 results converts them into evidence for or against germane inferential claims. These claims can be used to address/elucidate several foundational issues, including (i) statistical vs. substantive significance, (ii) the large n problem, and (iii) the replicability of evidence. Also, the SEV perspective sheds light on the impertinence of the proposed alternatives [i]–[iii], and oppugns [iii] the alleged arbitrariness of framing H0 and H1, which is often exploited to undermine the credibility of frequentist testing. Full article
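For readers who want to see the conversion in action, here is a small numerical illustration (our own toy example with a Normal model and known variance, not taken from the paper): after a one-sided test of H0: μ ≤ μ0 that does not reject, the severity of the claim μ ≤ μ1 is the probability of observing a result more discordant with H0 than the one actually observed, were μ equal to μ1.

```python
import numpy as np
from scipy.stats import norm

def severity_accept(xbar, mu1, sigma, n):
    """SEV(mu <= mu1) after a non-rejection in a one-sided Normal test with known sigma:
    P(Xbar > xbar_obs; mu = mu1)."""
    se = sigma / np.sqrt(n)
    return norm.sf((xbar - mu1) / se)

# n = 100, sigma = 1, H0: mu <= 0; observed mean 0.1 gives z = 1, so H0 is not rejected at 5%.
for mu1 in (0.05, 0.1, 0.2, 0.3):
    print(f"SEV(mu <= {mu1}): {severity_accept(0.1, mu1, 1.0, 100):.3f}")
```

A claim such as μ ≤ 0.3 passes with high severity here, while μ ≤ 0.05 does not, which is the sense in which the same "accept H0" result warrants some inferential claims but not others.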

21 pages, 800 KiB  
Article
Probabilistic Hesitant Fuzzy Evidence Theory and Its Application in Capability Evaluation of a Satellite Communication System
by Jiahuan Liu, Ping Jian, Desheng Liu and Wei Xiong
Entropy 2024, 26(1), 94; https://doi.org/10.3390/e26010094 - 22 Jan 2024
Viewed by 717
Abstract
Evaluating the capabilities of a satellite communication system (SCS) is challenging due to its complexity and ambiguity. It is difficult to accurately analyze uncertain situations, which makes it hard for experts to determine appropriate evaluation values. To address this problem, this paper proposes an innovative approach by extending the Dempster–Shafer evidence theory (DST) to the probabilistic hesitant fuzzy evidence theory (PHFET). The proposed approach introduces the concept of probabilistic hesitant fuzzy basic probability assignment (PHFBPA) to measure the degree of support for propositions, along with a combination rule and decision approach. Two methods are developed to generate PHFBPA based on multi-classifier and distance techniques, respectively. In order to improve the consistency of evidence, discounting factors are proposed using an entropy measure and the Jousselme distance of PHFBPA. In addition, a model for evaluating the degree of satisfaction of SCS capability requirements based on PHFET is presented. Experimental classification and evaluation of SCS capability requirements are performed to demonstrate the effectiveness and stability of the PHFET method. By employing the DST framework and probabilistic hesitant fuzzy sets, PHFET provides a compelling solution for handling ambiguous data in multi-source information fusion, thereby improving the evaluation of SCS capabilities. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
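For orientation, the DST machinery that PHFET extends can be sketched with the classical Dempster combination rule over a small frame of discernment (a standard textbook rule, shown here with made-up capability levels; the probabilistic hesitant fuzzy extension and the discounting factors of the paper are not reproduced).

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments (BPAs),
    given as dicts mapping frozenset propositions to masses."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                 # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination is undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two evidence sources about the capability level of a subsystem.
m1 = {frozenset({"high"}): 0.6, frozenset({"medium", "high"}): 0.3,
      frozenset({"low", "medium", "high"}): 0.1}
m2 = {frozenset({"high"}): 0.5, frozenset({"medium"}): 0.3,
      frozenset({"low", "medium", "high"}): 0.2}
print(dempster_combine(m1, m2))
```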

16 pages, 2785 KiB  
Article
Continual Reinforcement Learning for Quadruped Robot Locomotion
by Sibo Gai, Shangke Lyu, Hongyin Zhang and Donglin Wang
Entropy 2024, 26(1), 93; https://doi.org/10.3390/e26010093 - 22 Jan 2024
Cited by 1 | Viewed by 1167
Abstract
The ability to learn continuously is crucial for a robot to achieve a high level of intelligence and autonomy. In this paper, we consider continual reinforcement learning (RL) for quadruped robots, which includes the ability to continuously learn sequential tasks (plasticity) and maintain performance on previous tasks (stability). The policy obtained by the proposed method enables robots to learn multiple tasks sequentially, while overcoming both catastrophic forgetting and loss of plasticity. At the same time, it achieves the above goals with as little modification to the original RL learning process as possible. The proposed method uses the Piggyback algorithm to select protected parameters for each task, and reinitializes the unused parameters to increase plasticity. Meanwhile, exploration is encouraged by increasing the entropy of the policy network. Our experiments show that traditional continual learning algorithms cannot perform well on robot locomotion problems, and our algorithm is more stable and less disruptive to the RL training progress. Several robot locomotion experiments validate the effectiveness of our method. Full article
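A toy sketch of the Piggyback-style mechanism the abstract relies on (plain NumPy, illustrative only: in the real algorithm the per-task binary masks are learned, whereas here they are random): each task owns a binary mask over shared backbone weights, and weights not protected by any task's mask can be reinitialised to restore plasticity.

```python
import numpy as np

rng = np.random.default_rng(0)
shared_w = rng.normal(size=(4, 4))     # backbone weights shared across locomotion tasks
task_masks = {}                        # task id -> binary mask over shared_w

def add_task(task_id, keep_fraction=0.5):
    """Assign a binary mask to a new task (learned in Piggyback, random here)."""
    task_masks[task_id] = (rng.random(shared_w.shape) < keep_fraction).astype(float)

def effective_weights(task_id):
    """Only the masked-in weights are active when the robot performs this task."""
    return shared_w * task_masks[task_id]

def reinit_unused(weights):
    """Reinitialise weights protected by no task, increasing plasticity for future tasks."""
    used = np.zeros_like(weights)
    for m in task_masks.values():
        used = np.maximum(used, m)
    fresh = rng.normal(size=weights.shape)
    return np.where(used > 0, weights, fresh)

add_task("walk"); add_task("trot")
shared_w = reinit_unused(shared_w)
print(effective_weights("walk"))
```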

25 pages, 2760 KiB  
Article
Robust Multi-Dimensional Time Series Forecasting
by Chen Shen, Yong He and Jin Qin
Entropy 2024, 26(1), 92; https://doi.org/10.3390/e26010092 - 22 Jan 2024
Viewed by 929
Abstract
Large-scale and high-dimensional time series data are widely generated in modern applications such as intelligent transportation and environmental monitoring. However, such data contains much noise, outliers, and missing values due to interference during measurement or transmission. Directly forecasting such types of data (i.e., anomalous data) can be extremely challenging. The traditional method to deal with anomalies is to cut out the time series with anomalous value entries or replace the data. Both methods may lose important knowledge from the original data. In this paper, we propose a multidimensional time series forecasting framework that can better handle anomalous values: the robust temporal nonnegative matrix factorization forecasting model (RTNMFFM) for multi-dimensional time series. RTNMFFM integrates the autoregressive regularizer into nonnegative matrix factorization (NMF) with the application of the L2,1 norm in NMF. This approach improves robustness and alleviates overfitting compared to standard methods. In addition, to improve the accuracy of model forecasts on severely missing data, we propose a periodic smoothing penalty that keeps the sparse time slices as close as possible to the time slice with high confidence. Finally, we train the model using the alternating gradient descent algorithm. Numerous experiments demonstrate that RTNMFFM provides better robustness and better prediction accuracy. Full article
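As a point of reference for the factorization-based forecasting idea, below is a compact sketch of a plain NMF fit by multiplicative updates followed by a naive AR(1) forecast of the temporal factors (the paper's L2,1-norm robust loss, periodic smoothing penalty, and jointly learned autoregressive regularizer are not reproduced here).

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9):
    """Plain NMF with Lee-Seung multiplicative updates: V (m x t, nonnegative) ~ W @ H."""
    rng = np.random.default_rng(0)
    m, t = V.shape
    W, H = rng.random((m, rank)), rng.random((rank, t))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def ar1_forecast(H, steps=1):
    """Forecast each temporal factor with a separately fitted AR(1) coefficient."""
    coef = np.sum(H[:, 1:] * H[:, :-1], axis=1) / (np.sum(H[:, :-1] ** 2, axis=1) + 1e-12)
    h, preds = H[:, -1], []
    for _ in range(steps):
        h = coef * h
        preds.append(h)
    return np.stack(preds, axis=1)

V = np.abs(np.random.default_rng(1).normal(size=(8, 50)))   # 8 series, 50 time steps
W, H = nmf(V, rank=3)
print((W @ ar1_forecast(H, steps=2)).shape)                 # (8, 2): two-step-ahead forecasts
```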

22 pages, 28284 KiB  
Article
A Multi-Modal Deep-Learning Air Quality Prediction Method Based on Multi-Station Time-Series Data and Remote-Sensing Images: Case Study of Beijing and Tianjin
by Hanzhong Xia, Xiaoxia Chen, Zhen Wang, Xinyi Chen and Fangyan Dong
Entropy 2024, 26(1), 91; https://doi.org/10.3390/e26010091 - 22 Jan 2024
Viewed by 1154
Abstract
The profound impacts of severe air pollution on human health, ecological balance, and economic stability are undeniable. Precise air quality forecasting stands as a crucial necessity, enabling governmental bodies and vulnerable communities to proactively take essential measures to reduce exposure to detrimental pollutants. Previous research has primarily focused on predicting air quality using only time-series data. However, the importance of remote-sensing image data has received limited attention. This paper proposes a new multi-modal deep-learning model, Res-GCN, which integrates high spatial resolution remote-sensing images and time-series air quality data from multiple stations to forecast future air quality. Res-GCN employs two deep-learning networks, one utilizing the residual network to extract hidden visual information from remote-sensing images, and another using a dynamic spatio-temporal graph convolution network to capture spatio-temporal information from time-series data. By extracting features from two different modalities, improved predictive performance can be achieved. To demonstrate the effectiveness of the proposed model, experiments were conducted on two real-world datasets. The results show that the Res-GCN model effectively extracts multi-modal features, significantly enhancing the accuracy of multi-step predictions. Compared to the best-performing baseline model, the multi-step prediction’s mean absolute error, root mean square error, and mean absolute percentage error improved by approximately 6%, 7%, and 7%, respectively. Full article
(This article belongs to the Topic Complex Systems and Artificial Intelligence)

22 pages, 1242 KiB  
Perspective
Neural Geometrodynamics, Complexity, and Plasticity: A Psychedelics Perspective
by Giulio Ruffini, Edmundo Lopez-Sola, Jakub Vohryzek and Roser Sanchez-Todo
Entropy 2024, 26(1), 90; https://doi.org/10.3390/e26010090 - 22 Jan 2024
Viewed by 3102
Abstract
We explore the intersection of neural dynamics and the effects of psychedelics in light of distinct timescales in a framework integrating concepts from dynamics, complexity, and plasticity. We call this framework neural geometrodynamics for its parallels with general relativity’s description of the interplay of spacetime and matter. The geometry of trajectories within the dynamical landscape of “fast time” dynamics is shaped by the structure of a differential equation and its connectivity parameters, which themselves evolve over “slow time” driven by state-dependent and state-independent plasticity mechanisms. Finally, the adjustment of plasticity processes (metaplasticity) takes place in an “ultraslow” time scale. Psychedelics flatten the neural landscape, leading to heightened entropy and complexity of neural dynamics, as observed in neuroimaging and modeling studies linking increases in complexity with a disruption of functional integration. We highlight the relationship between criticality, the complexity of fast neural dynamics, and synaptic plasticity. Pathological, rigid, or “canalized” neural dynamics result in an ultrastable confined repertoire, allowing slower plastic changes to consolidate them further. However, under the influence of psychedelics, the destabilizing emergence of complex dynamics leads to a more fluid and adaptable neural state in a process that is amplified by the plasticity-enhancing effects of psychedelics. This shift manifests as an acute systemic increase of disorder and a possibly longer-lasting increase in complexity affecting both short-term dynamics and long-term plastic processes. Our framework offers a holistic perspective on the acute effects of these substances and their potential long-term impacts on neural structure and function. Full article
(This article belongs to the Section Entropy and Biology)

16 pages, 1497 KiB  
Article
Geometric Phase of a Transmon in a Dissipative Quantum Circuit
by Ludmila Viotti, Fernando C. Lombardo and Paula I. Villar
Entropy 2024, 26(1), 89; https://doi.org/10.3390/e26010089 - 22 Jan 2024
Viewed by 1290
Abstract
Superconducting circuits reveal themselves as promising physical devices with multiple uses. Within those uses, the fundamental concept of the geometric phase accumulated by the state of a system shows up recurrently, as, for example, in the construction of geometric gates. Given this framework, we study the geometric phases acquired by a paradigmatic setup: a transmon coupled to a superconducting resonant cavity. We do so both for the case in which the evolution is unitary and when it is subjected to dissipative effects. These models offer a comprehensive quantum description of an anharmonic system interacting with a single mode of the electromagnetic field within a perfect or dissipative cavity, respectively. In the dissipative model, the non-unitary effects arise from dephasing, relaxation, and decay of the transmon coupled to its environment. Our approach enables a comparison of the geometric phases obtained in these models, leading to a thorough understanding of the corrections introduced by the presence of the environment. Full article

25 pages, 1048 KiB  
Article
An Objective and Robust Bayes Factor for the Hypothesis Test One Sample and Two Population Means
by Israel A. Almodóvar-Rivera and Luis R. Pericchi-Guerra
Entropy 2024, 26(1), 88; https://doi.org/10.3390/e26010088 - 20 Jan 2024
Viewed by 992
Abstract
It has been over 100 years since the discovery of one of the most fundamental statistical tests: the Student’s t test. However, reliable conventional and objective Bayesian procedures are still essential for routine practice. In this work, we proposed an objective and robust Bayesian approach for hypothesis testing for one-sample and two-sample mean comparisons when the assumption of equal variances holds. The newly proposed Bayes factors are based on the intrinsic and Berger robust prior. Additionally, we introduced a corrected version of the Bayesian Information Criterion (BIC), denoted BIC-TESS, which is based on the effective sample size (TESS), for comparing two population means. We studied our developed Bayes factors in several simulation experiments for hypothesis testing. Our methodologies consistently provided strong evidence in favor of the null hypothesis in the case of equal means and variances. Finally, we applied the methodology to the original Gosset sleep data, concluding strong evidence favoring the hypothesis that the average sleep hours differed between the two treatments. These methodologies exhibit finite sample consistency and demonstrate consistent qualitative behavior, proving reasonably close to each other in practice, particularly for moderate to large sample sizes. Full article
(This article belongs to the Special Issue Bayesianism)
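For a rough sense of how BIC-style Bayes factors behave in the two-sample setting (this is the generic BIC approximation under equal variances, not the intrinsic/Berger robust Bayes factor or the BIC-TESS correction developed in the paper), a sketch follows.

```python
import numpy as np

def bic_bayes_factor_two_means(x, y):
    """Approximate BF01 (evidence for equal means) via BF01 ~ exp((BIC1 - BIC0) / 2),
    with Normal likelihoods, equal variances, and MLE plug-in variance estimates."""
    z = np.concatenate([x, y]); n = z.size
    rss0 = np.sum((z - z.mean()) ** 2)                                 # H0: one common mean
    rss1 = np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)   # H1: two means
    bic0 = n * np.log(rss0 / n) + 2 * np.log(n)                        # mean + variance
    bic1 = n * np.log(rss1 / n) + 3 * np.log(n)                        # two means + variance
    return np.exp((bic1 - bic0) / 2.0)

rng = np.random.default_rng(2)
print(bic_bayes_factor_two_means(rng.normal(0, 1, 30), rng.normal(0, 1, 30)))  # > 1: favours equal means
print(bic_bayes_factor_two_means(rng.normal(0, 1, 30), rng.normal(1, 1, 30)))  # < 1: favours different means
```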

19 pages, 1160 KiB  
Article
Degeneracy and Photon Trapping in a Dissipationless Two-Mode Optomechanical Model
by Thiago Alonso Merici, Thiago Gomes De Mattos and José Geraldo Peixoto De Faria
Entropy 2024, 26(1), 87; https://doi.org/10.3390/e26010087 - 19 Jan 2024
Viewed by 717
Abstract
In this work, we theoretically study a finite and undamped two-mode optomechanical model consisting of a high quality optical cavity containing a thin, elastic, and dielectric membrane. The main objective is to investigate the precursors of quantum phase transition in such a model by studying the behavior of some observables in the ground state. By controlling the coupling between membrane and modes, we find that the two lowest energy eigenstates become degenerate, as is indicated by the behavior of the mean value of some operators and by other quantifiers as a function of the coupling. Such degenerate states are characterized by a coherent superposition of eigenstates describing one of the two modes preferentially populated and the membrane dislocated from its equilibrium position due to the radiation pressure (Schrödinger’s cat states). The delocalization of the compound system photons+membrane results in an increase in fluctuations as measured by Robertson–Schrödinger uncertainty relations. Full article
(This article belongs to the Section Non-equilibrium Phenomena)

29 pages, 505 KiB  
Review
An Information Theoretic Condition for Perfect Reconstruction
by Idris Delsol, Olivier Rioul, Julien Béguinot, Victor Rabiet and Antoine Souloumiac
Entropy 2024, 26(1), 86; https://doi.org/10.3390/e26010086 - 19 Jan 2024
Cited by 1 | Viewed by 786
Abstract
A new information theoretic condition is presented for reconstructing a discrete random variable X based on the knowledge of a set of discrete functions of X. The reconstruction condition is derived from Shannon’s 1953 lattice theory with two entropic metrics of Shannon and Rajski. Because this theoretical material is relatively unknown and appears quite dispersed across different references, we first provide a synthetic description (with complete proofs) of its concepts, such as total, common, and complementary information. The definitions and properties of the two entropic metrics are also fully detailed and shown to be compatible with the lattice structure. A new geometric interpretation of such a lattice structure is then investigated, which leads to a necessary (and sometimes sufficient) condition for reconstructing the discrete random variable X given a set {X1, …, Xn} of elements in the lattice generated by X. Intuitively, the components X1, …, Xn of the original source of information X should not be globally “too far away” from X in the entropic distance in order that X is reconstructable. In other words, these components should not overall have too low a dependence on X; otherwise, reconstruction is impossible. These geometric considerations constitute a starting point for a possible novel “perfect reconstruction theory”, which needs to be further investigated and improved along these lines. Finally, this condition is illustrated in five specific examples of perfect reconstruction problems: the reconstruction of a symmetric random variable from the knowledge of its sign and absolute value, the reconstruction of a word from a set of linear combinations, the reconstruction of an integer from its prime signature (fundamental theorem of arithmetic) and from its remainders modulo a set of coprime integers (Chinese remainder theorem), and the reconstruction of the sorting permutation of a list from a minimal set of pairwise comparisons. Full article
(This article belongs to the Special Issue Shannon Entropy: Mathematical View)
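To illustrate the entropic metrics at work on the paper's first example (our own simulation; the distance formulas d(X, Y) = H(X|Y) + H(Y|X) and Rajski's normalised version d(X, Y)/H(X, Y) are standard), the following sketch reconstructs a symmetric random variable from its sign and absolute value.

```python
import numpy as np
from collections import Counter

def entropy(samples):
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def shannon_distance(xs, ys):
    """d(X, Y) = H(X|Y) + H(Y|X) = 2*H(X, Y) - H(X) - H(Y)."""
    return 2 * entropy(list(zip(xs, ys))) - entropy(xs) - entropy(ys)

def rajski_distance(xs, ys):
    """Rajski's normalised metric d(X, Y) / H(X, Y), taking values in [0, 1]."""
    return shannon_distance(xs, ys) / entropy(list(zip(xs, ys)))

rng = np.random.default_rng(3)
x = rng.choice([-2, -1, 1, 2], size=20000)               # a symmetric random variable
sign, absolute = np.sign(x), np.abs(x)
components = list(zip(sign, absolute))                    # X1 = sign(X), X2 = |X|
print(rajski_distance(x, components))                     # ~ 0: X is perfectly reconstructable
print(rajski_distance(x, sign), rajski_distance(x, absolute))  # each alone is "too far" (~ 0.5)
```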

16 pages, 1935 KiB  
Article
Identification of Critical Links Based on Electrical Betweenness and Neighborhood Similarity in Cyber-Physical Power Systems
by Jiuling Dong, Zilong Song, Yuanshuo Zheng, Jingtang Luo, Min Zhang, Xiaolong Yang and Hongbing Ma
Entropy 2024, 26(1), 85; https://doi.org/10.3390/e26010085 - 19 Jan 2024
Viewed by 809
Abstract
Identifying critical links is of great importance for ensuring the safety of the cyber-physical power system. Traditional electrical betweenness only considers power flow distribution on the link itself, while ignoring the local influence of neighborhood links and the coupled reaction of information flow on energy flow. An identification method based on electrical betweenness centrality and neighborhood similarity is proposed to consider the internal power flow dynamic influence existing in multi-neighborhood nodes and the topological structure interdependence between power nodes and communication nodes. Firstly, for the power network, the electrical topological overlap is proposed to quantify the vulnerability of the links. This approach comprehensively considers the local contribution of neighborhood nodes, power transmission characteristics, generator capacity, and load. Secondly, in communication networks, effective distance closeness centrality is defined to evaluate the importance of communication links, simultaneously taking into account factors such as the information equipment function and spatial relationships. Next, under the influence of coupled factors, a comprehensive model is constructed based on the dependency relationships between information flow and energy flow to more accurately assess the critical links in the power network. Finally, the simulation results show the effectiveness of the proposed method under dynamic and static attacks. Full article

15 pages, 3124 KiB  
Article
Microchannel Gas Flow in the Multi-Flow Regime Based on the Lattice Boltzmann Method
by Xiaoyu Li, Zhi Ning and Ming Lü
Entropy 2024, 26(1), 84; https://doi.org/10.3390/e26010084 - 18 Jan 2024
Viewed by 824
Abstract
In this work, a lattice Boltzmann method (LBM) for studying microchannel gas flow is developed in the multi-flow regime. In the LBM, by comparing previous studies’ results on effective viscosity in multi-flow regimes, the values of the rarefaction factor applicable to multi-flow regions were determined, and the relationship of the relaxation time and the Kn number to the rarefaction factor is given. The Kn number is introduced into the second-order slip boundary condition together with the combined bounce-back/specular-reflection (CBBSR) scheme to capture the gas flow in the multi-flow regime. Sensitivity analysis of the dimensionless flow rate to adjustable parameters using the Taguchi method was carried out, and the values of adjustable parameters were determined based on the results of the sensitivity analysis. The results show that the dimensionless flow rate is more sensitive to j than h. Numerical simulations of Poiseuille flow and pulsating flow in a microchannel with second-order slip boundary conditions are carried out to validate the method. The results show that the velocity profile and dimensionless flow rate simulated by the present method fall within the multi-flow regime, and the phenomenon of an annular velocity profile in the microchannel is reflected in the phases. Full article
(This article belongs to the Special Issue Mesoscopic Fluid Mechanics)

32 pages, 5447 KiB  
Article
Spatial and Temporal Hierarchy for Autonomous Navigation Using Active Inference in Minigrid Environment
by Daria de Tinguy, Toon Van de Maele, Tim Verbelen and Bart Dhoedt
Entropy 2024, 26(1), 83; https://doi.org/10.3390/e26010083 - 18 Jan 2024
Viewed by 1142
Abstract
Robust evidence suggests that humans explore their environment using a combination of topological landmarks and coarse-grained path integration. This approach relies on identifiable environmental features (topological landmarks) in tandem with estimations of distance and direction (coarse-grained path integration) to construct cognitive maps of the surroundings. This cognitive map is believed to exhibit a hierarchical structure, allowing efficient planning when solving complex navigation tasks. Inspired by human behaviour, this paper presents a scalable hierarchical active inference model for autonomous navigation, exploration, and goal-oriented behaviour. The model uses visual observation and motion perception to combine curiosity-driven exploration with goal-oriented behaviour. Motion is planned using different levels of reasoning, i.e., from context to place to motion. This allows for efficient navigation in new spaces and rapid progress toward a target. By incorporating these human navigational strategies and their hierarchical representation of the environment, this model proposes a new solution for autonomous navigation and exploration. The approach is validated through simulations in a mini-grid environment. Full article

13 pages, 793 KiB  
Article
Slope Entropy Characterisation: An Asymmetric Approach to Threshold Parameters Role Analysis
by Mahdy Kouka, David Cuesta-Frau and Vicent Moltó-Gallego
Entropy 2024, 26(1), 82; https://doi.org/10.3390/e26010082 - 18 Jan 2024
Viewed by 855
Abstract
Slope Entropy (SlpEn) is a novel method recently proposed in the field of time series entropy estimation. In addition to the well-known embedded dimension parameter, m, used in other methods, it applies two additional thresholds, denoted as δ and γ, to derive a symbolic representation of a data subsequence. The original paper introducing SlpEn provided some guidelines for recommended specific values of these two parameters, which have been successfully followed in subsequent studies. However, a deeper understanding of the role of these thresholds is necessary to explore the potential for further SlpEn optimisations. Some works have already addressed the role of δ, but in this paper, we extend this investigation to include the role of γ and explore the impact of using an asymmetric scheme to select threshold values. We conduct a comparative analysis between the standard SlpEn method as initially proposed and an optimised version obtained through a grid search to maximise signal classification performance based on SlpEn. The results confirm that the optimised version achieves higher time series classification accuracy, albeit at the cost of significantly increased computational complexity. Full article
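A minimal sketch of the SlpEn computation as commonly described (our own implementation with symbol conventions that follow the usual SlpEn definition; the parameter defaults are illustrative, not the recommendations discussed in the paper):

```python
import numpy as np
from collections import Counter

def slope_entropy(x, m=3, gamma=1.0, delta=1e-3):
    """Slope Entropy: symbolise consecutive differences with thresholds delta (near-flat)
    and gamma (steep), count patterns of m-1 symbols, and return their Shannon entropy."""
    d = np.diff(np.asarray(x, dtype=float))
    symbols = np.select(
        [d > gamma, d > delta, np.abs(d) <= delta, d >= -gamma],
        [2, 1, 0, -1],
        default=-2,
    )
    patterns = [tuple(symbols[i:i + m - 1]) for i in range(len(symbols) - m + 2)]
    counts = np.array(list(Counter(patterns).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(4)
print(slope_entropy(rng.normal(size=1000)))              # irregular signal: higher SlpEn
print(slope_entropy(np.sin(np.linspace(0, 20, 1000))))   # smooth signal: lower SlpEn
```

Making the positive and negative thresholds differ (the asymmetric scheme studied in the paper) would simply replace the single delta/gamma pair with separate values for rising and falling slopes.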

11 pages, 599 KiB  
Article
Exploring Physiological Differences in Brain Areas Using Statistical Complexity Analysis of BOLD Signals
by Catalina Morales-Rojas, Ronney B. Panerai and José Luis Jara
Entropy 2024, 26(1), 81; https://doi.org/10.3390/e26010081 - 18 Jan 2024
Viewed by 842
Abstract
The brain is a fundamental organ for the human body to function properly, for which it needs to receive a continuous flow of blood, which explains the existence of control mechanisms that act to maintain this flow as constant as possible in a process known as cerebral autoregulation. One way to obtain information on how the levels of oxygen supplied to the brain vary is through BOLD (Magnetic Resonance) images, which have the advantage of greater spatial resolution than other forms of measurement, such as transcranial Doppler. However, they do not provide good temporal resolution nor allow for continuous prolonged examination. Thus, it is of great importance to find a method to detect regional differences from short BOLD signals. One of the existing alternatives is complexity measures that can detect changes in the variability and temporal organisation of a signal that could reflect different physiological states. The so-called statistical complexity, created to overcome the shortcomings of entropy alone to explain the concept of complexity, has shown potential with haemodynamic signals. The aim of this study is to determine by using statistical complexity whether it is possible to find differences between physiologically distinct brain areas in healthy individuals. The data set includes BOLD images of 10 people obtained at the University Hospital of Leicester NHS Trust with a 1.5 Tesla magnetic resonance imaging scanner. The data were captured for 180 s at a frequency of 1 Hz. Using various combinations of statistical complexities, no differences were found between hemispheres. However, differences were detected between grey matter and white matter, indicating that these measurements are sensitive to differences in brain tissues. Full article
(This article belongs to the Special Issue Nonlinear Methods for Biomedical Engineering)
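One widely used recipe for statistical complexity, the López-Ruiz–Mancini–Calbet (LMC) product of normalised entropy and disequilibrium, can be sketched as below; the study uses various combinations of statistical complexities, so this is only one representative variant, applied to a toy signal rather than to BOLD data.

```python
import numpy as np

def lmc_complexity(signal, bins=16):
    """LMC statistical complexity of a signal's amplitude histogram:
    C = H_norm * D, with H_norm the normalised Shannon entropy of the histogram and
    D the Euclidean disequilibrium (squared distance from the uniform distribution)."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()
    nz = p[p > 0]
    h_norm = -np.sum(nz * np.log(nz)) / np.log(bins)
    d = np.sum((p - 1.0 / bins) ** 2)
    return h_norm * d

rng = np.random.default_rng(5)
print(lmc_complexity(rng.normal(size=180)))    # 180 samples, mimicking 180 s of 1 Hz BOLD data
print(lmc_complexity(rng.uniform(size=180)))   # flatter histogram: lower disequilibrium and complexity
```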

28 pages, 1120 KiB  
Article
Quantum Secure Multi-Party Summation with Graph State
by Yaohua Lu and Gangyi Ding
Entropy 2024, 26(1), 80; https://doi.org/10.3390/e26010080 - 17 Jan 2024
Viewed by 1032
Abstract
Quantum secure multi-party summation (QSMS) is a fundamental problem in quantum secure multi-party computation (QSMC), wherein multiple parties compute the sum of their data without revealing them. This paper proposes a novel QSMS protocol based on graph state, which offers enhanced security, usability, and flexibility compared to existing methods. The protocol leverages the structural advantages of graph state and employs random graph state structures and random encryption gate operations to provide stronger security. Additionally, the stabilizer of the graph state is utilized to detect eavesdroppers and channel noise without the need for decoy bits. The protocol allows for the arbitrary addition and deletion of participants, enabling greater flexibility. Experimental verification is conducted to demonstrate the security, effectiveness, and practicality of the proposed protocols. The correctness and security of the protocols are formally proven. The QSMS method based on graph state introduces new opportunities for QSMC. It highlights the potential of leveraging quantum graph state technology to securely and efficiently solve various multi-party computation problems. Full article
(This article belongs to the Special Issue Quantum and Classical Physical Cryptography)

26 pages, 1507 KiB  
Article
Entropy Estimators for Markovian Sequences: A Comparative Analysis
by Juan De Gregorio, David Sánchez and Raúl Toral
Entropy 2024, 26(1), 79; https://doi.org/10.3390/e26010079 - 17 Jan 2024
Cited by 2 | Viewed by 1020
Abstract
Entropy estimation is a fundamental problem in information theory that has applications in various fields, including physics, biology, and computer science. Estimating the entropy of discrete sequences can be challenging due to limited data and the lack of unbiased estimators. Most existing entropy estimators are designed for sequences of independent events and their performances vary depending on the system being studied and the available data size. In this work, we compare different entropy estimators and their performance when applied to Markovian sequences. Specifically, we analyze both binary Markovian sequences and Markovian systems in the undersampled regime. We calculate the bias, standard deviation, and mean squared error for some of the most widely employed estimators. We discuss the limitations of entropy estimation as a function of the transition probabilities of the Markov processes and the sample size. Overall, this paper provides a comprehensive comparison of entropy estimators and their performance in estimating entropy for systems with memory, which can be useful for researchers and practitioners in various fields. Full article
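A small sketch contrasting two classic estimators in this family, the plug-in (maximum-likelihood) estimator and its Miller–Madow bias correction, on length-3 blocks of a short binary Markov sequence (a toy illustration, not the paper's systematic comparison):

```python
import numpy as np
from collections import Counter

def markov_binary(n, p01=0.2, p10=0.4, seed=0):
    """Binary Markov chain with switching probabilities P(0->1)=p01 and P(1->0)=p10."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=int); x[0] = 0
    for i in range(1, n):
        switch = p10 if x[i - 1] == 1 else p01
        x[i] = 1 - x[i - 1] if rng.random() < switch else x[i - 1]
    return x

def plug_in_entropy(blocks):
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def miller_madow(blocks):
    """Plug-in estimate plus the (K - 1) / (2 N ln 2) bias correction (K = observed blocks)."""
    k, n = len(set(blocks)), len(blocks)
    return plug_in_entropy(blocks) + (k - 1) / (2 * n * np.log(2))

x = markov_binary(500)
blocks = [tuple(x[i:i + 3]) for i in range(len(x) - 2)]   # overlapping length-3 blocks
print("plug-in:      ", plug_in_entropy(blocks))
print("Miller-Madow: ", miller_madow(blocks))
```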

12 pages, 426 KiB  
Article
Uncertainty in GNN Learning Evaluations: A Comparison between Measures for Quantifying Randomness in GNN Community Detection
by William Leeney and Ryan McConville
Entropy 2024, 26(1), 78; https://doi.org/10.3390/e26010078 - 17 Jan 2024
Viewed by 877
Abstract
(1) The enhanced capability of graph neural networks (GNNs) in unsupervised community detection of clustered nodes is attributed to their capacity to encode both the connectivity and feature information spaces of graphs. The identification of latent communities holds practical significance in various domains, from social networks to genomics. Current real-world performance benchmarks are perplexing due to the multitude of decisions influencing GNN evaluations for this task. (2) Three metrics are compared to assess the consistency of algorithm rankings in the presence of randomness. The consistency and quality of performance are compared between results obtained with hyperparameter optimisation and with the default hyperparameters. (3) The results compare hyperparameter optimisation with default hyperparameters, revealing a significant performance loss when neglecting hyperparameter investigation. A comparison of metrics indicates that ties in ranks can substantially alter the quantification of randomness. (4) Ensuring adherence to the same evaluation criteria may result in notable differences in the reported performance of methods for this task. The W randomness coefficient, based on the Wasserstein distance, is identified as providing the most robust assessment of randomness. Full article

14 pages, 326 KiB  
Article
Multi-Additivity in Kaniadakis Entropy
by Antonio M. Scarfone and Tatsuaki Wada
Entropy 2024, 26(1), 77; https://doi.org/10.3390/e26010077 - 17 Jan 2024
Viewed by 854
Abstract
It is known that Kaniadakis entropy, a generalization of the Shannon–Boltzmann–Gibbs entropic form, is always super-additive for any bipartite statistically independent distributions. In this paper, we show that when imposing a suitable constraint, there exist classes of maximal entropy distributions labeled by a positive real number ℓ > 0 that makes Kaniadakis entropy multi-additive, i.e., Sκ[pAB] = (1 + ℓ)(Sκ[pA] + Sκ[pB]), under the composition of two statistically independent and identically distributed distributions pAB(x, y) = pA(x)pB(y), with reduced distributions pA(x) and pB(y) belonging to the same class. Full article
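For orientation, the super-additivity that the paper starts from can be checked numerically with the standard Kaniadakis κ-entropy, S_κ[p] = −Σ_i p_i ln_κ(p_i) with ln_κ(x) = (x^κ − x^{−κ})/(2κ) (toy distributions of our own; the constrained maximal-entropy classes constructed in the paper are not reproduced here):

```python
import numpy as np

def kaniadakis_entropy(p, kappa=0.3):
    """S_kappa[p] = -sum_i p_i * ln_kappa(p_i), ln_kappa(x) = (x**kappa - x**(-kappa)) / (2*kappa).
    Reduces to the Shannon-Boltzmann-Gibbs entropy (in nats) as kappa -> 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    ln_k = (p ** kappa - p ** (-kappa)) / (2 * kappa)
    return -np.sum(p * ln_k)

rng = np.random.default_rng(6)
pA, pB = rng.dirichlet(np.ones(5)), rng.dirichlet(np.ones(5))
pAB = np.outer(pA, pB).ravel()                 # statistically independent joint distribution
s_joint = kaniadakis_entropy(pAB)
s_sum = kaniadakis_entropy(pA) + kaniadakis_entropy(pB)
print(s_joint >= s_sum, s_joint, s_sum)        # super-additivity: the joint entropy exceeds the sum
```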

11 pages, 991 KiB  
Article
(Nano)Granules-Involving Aggregation at a Passage to the Nanoscale as Viewed in Terms of a Diffusive Heisenberg Relation
by Adam Gadomski
Entropy 2024, 26(1), 76; https://doi.org/10.3390/e26010076 - 17 Jan 2024
Viewed by 1123
Abstract
We are looking at an aggregation of matter into granules. Diffusion plays a pivotal role here. When going down to the nanometer scale (the so-called nanoscale quantum-size effect limit), quantum mechanics, and the Heisenberg uncertainty relation, may take over the role of classical diffusion, as viewed typically in the mesoscopic/stochastic limit. A d-dimensional entropy-production aggregation of the granules-involving matter in the granule-size space is considered in terms of a (sub)diffusive realization. It turns out that when taking a full d-dimensional pathway of the aggregation toward the nanoscale, one is capable of disclosing a Heisenberg-type (diffusional) relation, setting up an upper uncertainty bound for the (sub)diffusive, very slow granules-including environment that, within the granule-size analogy invoked, matches the quantum limit of h/(2πμ) (μ—average mass of a granule; h—Planck’s constant) for the diffusion coefficient of the aggregation, first proposed by Fürth in 1933 and qualitatively foreseen by Schrödinger some years before, with both in the context of a diffusing particle. The classical-to-quantum passage uncovered here, also termed insightfully as the quantum-size effect (as borrowed from the quantum dots’ parlance), works properly for the three-dimensional (d = 3) case, making use of a substantial physical fact that the (nano)granules interact readily via their surfaces with the also-granular surroundings in which they are immersed. This natural observation is embodied in the basic averaging construction of the diffusion coefficient of the entropy-productive (nano)aggregation of interest. Full article
(This article belongs to the Special Issue Matter-Aggregating Systems at a Classical vs. Quantum Interface)

17 pages, 1843 KiB  
Article
Quantum Measurements and Delays in Scattering by Zero-Range Potentials
by Xabier Gutiérrez, Marisa Pons and Dmitri Sokolovski
Entropy 2024, 26(1), 75; https://doi.org/10.3390/e26010075 - 16 Jan 2024
Viewed by 770
Abstract
Eisenbud–Wigner–Smith delay and the Larmor time give different estimates for the duration of a quantum scattering event. The difference is most pronounced in the case where the de Broglie wavelength is large compared to the size of the scatterer. We use the methods of quantum measurement theory to analyse both approaches and to decide which one of them, if any, describes the duration a particle spends in the region that contains the scattering potential. The cases of transmission, reflection, and three-dimensional elastic scattering are discussed in some detail. Full article
(This article belongs to the Special Issue Quantum Mechanics and the Challenge of Time)

26 pages, 1995 KiB  
Article
Identity-Based Matchmaking Encryption with Equality Test
by Zhen Yan, Xijun Lin, Xiaoshuai Zhang, Jianliang Xu and Haipeng Qu
Entropy 2024, 26(1), 74; https://doi.org/10.3390/e26010074 - 15 Jan 2024
Viewed by 823
Abstract
The identity-based encryption with equality test (IBEET) has become a hot research topic in cloud computing as it provides an equality test for ciphertexts generated under different identities while preserving confidentiality. Subsequently, for the sake of the confidentiality and authenticity of the data, the identity-based signcryption with equality test (IBSC-ET) has been put forward. Nevertheless, the existing schemes do not consider the anonymity of the sender and the receiver, which leads to the potential leakage of sensitive personal information. How to ensure confidentiality, authenticity, and anonymity in the IBEET setting remains a significant challenge. In this paper, we put forward the concept of the identity-based matchmaking encryption with equality test (IBME-ET) to address this issue. We formalized the system model, the definition, and the security models of the IBME-ET and then put forward a concrete scheme. Furthermore, our scheme was confirmed to be secure and practical by proving its security and evaluating its performance. Full article
(This article belongs to the Special Issue Advances in Information Sciences and Applications II)

15 pages, 7064 KiB  
Article
Study on Microstructure and High Temperature Stability of WTaVTiZrx Refractory High Entropy Alloy Prepared by Laser Cladding
by Xiaoyu Ding, Weigui Wang, Haojie Zhang, Xueqin Tian, Laima Luo, Yucheng Wu and Jianhua Yao
Entropy 2024, 26(1), 73; https://doi.org/10.3390/e26010073 - 15 Jan 2024
Viewed by 896
Abstract
The extremely harsh environment of the high temperature plasma imposes strict requirements on the construction materials of the first wall in a fusion reactor. In this work, a refractory alloy system, WTaVTiZrx, with low activation and high entropy, was theoretically designed based on semi-empirical formula and produced using a laser cladding method. The effects of Zr proportions on the metallographic microstructure, phase composition, and alloy chemistry of a high-entropy alloy cladding layer were investigated using a metallographic microscope, XRD (X-ray diffraction), SEM (scanning electron microscope), and EDS (energy dispersive spectrometer). The high-entropy alloys have a single-phase BCC structure, and the cladding layers exhibit a typical dendritic microstructure feature. The evolution of microstructure and mechanical properties of the high-entropy alloys, with respect to annealing temperature, was studied to reveal the performance stability of the alloy at a high temperature. The microstructure of the annealed samples at 900 °C for 5–10 h did not show significant changes compared to the as-cast samples, and the microhardness increased to 988.52 HV, which was higher than that of the as-cast samples (725.08 HV). When annealed at 1100 °C for 5 h, the microstructure remained unchanged, and the microhardness increased. However, after annealing for 10 h, black substances appeared in the microstructure, and the microhardness decreased, but it was still higher than the matrix. When annealed at 1200 °C for 5–10 h, the microhardness did not increase significantly compared to the as-cast samples, and after annealing for 10 h, the microhardness was even lower than that of the as-cast samples. The phase of the high entropy alloy did not change significantly after high-temperature annealing, indicating good phase stability at high temperatures. Full article

19 pages, 5625 KiB  
Article
Optimal Robust Control of Nonlinear Systems with Unknown Dynamics via NN Learning with Relaxed Excitation
by Rui Luo, Zhinan Peng and Jiangping Hu
Entropy 2024, 26(1), 72; https://doi.org/10.3390/e26010072 - 14 Jan 2024
Viewed by 772
Abstract
This paper presents an adaptive learning structure based on neural networks (NNs) to solve the optimal robust control problem for nonlinear continuous-time systems with unknown dynamics and disturbances. First, a system identifier is introduced to approximate the unknown system matrices and disturbances with the help of NNs and parameter estimation techniques. To obtain the optimal solution of the optimal robust control problem, a critic learning control structure is proposed to compute the approximate controller. Unlike existing identifier-critic NNs learning control methods, novel adaptive tuning laws based on Kreisselmeier’s regressor extension and mixing technique are designed to estimate the unknown parameters of the two NNs under relaxed persistence of excitation conditions. Furthermore, theoretical analysis is also given to prove the significant relaxation of the proposed convergence conditions. Finally, effectiveness of the proposed learning approach is demonstrated via a simulation study. Full article
(This article belongs to the Special Issue Intelligent Modeling and Control)

28 pages, 558 KiB  
Article
Leakage Benchmarking for Universal Gate Sets
by Bujiao Wu, Xiaoyang Wang, Xiao Yuan, Cupjin Huang and Jianxin Chen
Entropy 2024, 26(1), 71; https://doi.org/10.3390/e26010071 - 13 Jan 2024
Cited by 1 | Viewed by 807
Abstract
Errors are common issues in quantum computing platforms, among which leakage is one of the most challenging to address. This is because leakage, i.e., the loss of information stored in the computational subspace to undesired subspaces in a larger Hilbert space, is more difficult to detect and correct than errors that preserve the computational subspace. As a result, leakage presents a significant obstacle to the development of fault-tolerant quantum computation. In this paper, we propose an efficient and accurate benchmarking framework called leakage randomized benchmarking (LRB) for measuring leakage rates on multi-qubit quantum systems. Our approach is less sensitive to state preparation and measurement (SPAM) noise than existing leakage benchmarking protocols, requires fewer assumptions about the gate set itself, and can be used to benchmark multi-qubit leakages, which has not been achieved previously. We also extend the LRB protocol to an interleaved variant called interleaved LRB (iLRB), which can benchmark the average leakage rate of generic n-site quantum gates with reasonable noise assumptions. We demonstrate the iLRB protocol on benchmarking generic two-qubit gates realized using flux tuning and analyze the behavior of iLRB under corresponding leakage models. Our numerical experiments show good agreement with the theoretical estimations, indicating the feasibility of both the LRB and iLRB protocols. Full article
(This article belongs to the Special Issue Quantum Computing in the NISQ Era)

25 pages, 842 KiB  
Article
Enhancing Exchange-Traded Fund Price Predictions: Insights from Information-Theoretic Networks and Node Embeddings
by Insu Choi and Woo Chang Kim
Entropy 2024, 26(1), 70; https://doi.org/10.3390/e26010070 - 12 Jan 2024
Viewed by 737
Abstract
This study presents a novel approach to predicting price fluctuations for U.S. sector index ETFs. By leveraging information-theoretic measures like mutual information and transfer entropy, we constructed threshold networks highlighting nonlinear dependencies between log returns and trading volume rate changes. We derived centrality measures and node embeddings from these networks, offering unique insights into the ETFs’ dynamics. By integrating these features into gradient-boosting algorithm-based models, we significantly enhanced the predictive accuracy. Our approach offers improved forecast performance for U.S. sector index futures and adds a layer of explainability to the existing literature. Full article
(This article belongs to the Special Issue Information Theory-Based Approach to Portfolio Optimization)
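A rough sketch of the network-construction step described above, assuming pairwise mutual information estimated with scikit-learn, a simple threshold, and centrality features from NetworkX (transfer entropy, node embeddings, and the gradient-boosting forecasters themselves are left out):

```python
import numpy as np
import networkx as nx
from sklearn.feature_selection import mutual_info_regression

def mi_threshold_network(returns, threshold=0.05):
    """Threshold network whose nodes are assets and whose edges link pairs with
    estimated mutual information above `threshold` (returns: n_obs x n_assets)."""
    n_assets = returns.shape[1]
    g = nx.Graph()
    g.add_nodes_from(range(n_assets))
    for i in range(n_assets):
        for j in range(i + 1, n_assets):
            mi = mutual_info_regression(returns[:, [i]], returns[:, j], random_state=0)[0]
            if mi > threshold:
                g.add_edge(i, j, weight=mi)
    return g

rng = np.random.default_rng(7)
common = rng.normal(size=(500, 1))
rets = 0.6 * common + rng.normal(scale=0.8, size=(500, 8))   # toy correlated "sector" returns
g = mi_threshold_network(rets)
deg, btw = nx.degree_centrality(g), nx.betweenness_centrality(g)
features = {n: (deg[n], btw[n]) for n in g}                  # candidate node features for forecasting
print(features)
```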

14 pages, 302 KiB  
Article
Electrodynamics of Superconductors: From Lorentz to Galilei at Zero Temperature
by Luca Salasnich
Entropy 2024, 26(1), 69; https://doi.org/10.3390/e26010069 - 12 Jan 2024
Viewed by 785
Abstract
We discuss the derivation of the electrodynamics of superconductors coupled to the electromagnetic field from a Lorentz-invariant bosonic model of Cooper pairs. Our results are obtained at zero temperature where, according to the third law of thermodynamics, the entropy of the system is zero. In the nonrelativistic limit, we obtain a Galilei-invariant superconducting system, which differs from the familiar Schrödinger-like one. From this point of view, there are similarities with the Pauli equation of fermions, which is derived from the Dirac equation in the nonrelativistic limit and has a spin-magnetic field term in contrast with the Schrödinger equation. One of the peculiar effects of our model is the decay of a static electric field inside a superconductor exactly with the London penetration length. In addition, our theory predicts a modified D’Alembert equation for the massive electromagnetic field also in the case of nonrelativistic superconducting matter. We emphasize the role of the Nambu–Goldstone phase field, which is crucial to obtain the collective modes of the superconducting matter field. In the special case of a nonrelativistic neutral superfluid, we find a gapless Bogoliubov-like spectrum, while for the charged superfluid we obtain a dispersion relation that is gapped by the plasma frequency. Full article
(This article belongs to the Section Statistical Physics)
13 pages, 328 KiB  
Article
Square Root Statistics of Density Matrices and Their Applications
by Lyuzhou Ye, Youyi Huang, James C. Osborn and Lu Wei
Entropy 2024, 26(1), 68; https://doi.org/10.3390/e26010068 - 12 Jan 2024
Viewed by 780
Abstract
To estimate the degree of quantum entanglement of random pure states, it is crucial to understand the statistical behavior of entanglement indicators such as the von Neumann entropy, quantum purity, and entanglement capacity. These entanglement metrics are functions of the spectrum of density matrices, and their statistical behavior over different generic state ensembles has been intensively studied in the literature. As an alternative metric, in this work, we study the sum of the square root spectrum of density matrices, which is relevant to negativity and fidelity in quantum information processing. In particular, we derive the finite-size mean and variance formulas of the sum of the square root spectrum over the Bures–Hall ensemble, extending known results obtained recently over the Hilbert–Schmidt ensemble. Full article
(This article belongs to the Section Statistical Physics)

21 pages, 6690 KiB  
Review
Topological Data Analysis in Cardiovascular Signals: An Overview
by Enrique Hernández-Lemus, Pedro Miramontes and Mireya Martínez-García
Entropy 2024, 26(1), 67; https://doi.org/10.3390/e26010067 - 12 Jan 2024
Viewed by 1286
Abstract
Topological data analysis (TDA) is a recent approach for analyzing and interpreting complex data sets based on ideas from a branch of mathematics called algebraic topology. TDA has proven useful to disentangle non-trivial data structures in a broad range of data analytics problems including the study of cardiovascular signals. Here, we aim to provide an overview of the application of TDA to cardiovascular signals and its potential to enhance the understanding of cardiovascular diseases and their treatment in the form of a literature or narrative review. We first introduce the concept of TDA and its key techniques, including persistent homology, Mapper, and multidimensional scaling. We then discuss the use of TDA in analyzing various cardiovascular signals, including electrocardiography, photoplethysmography, and arterial stiffness. We also discuss the potential of TDA to improve the diagnosis and prognosis of cardiovascular diseases, as well as its limitations and challenges. Finally, we outline future directions for the use of TDA in cardiovascular signal analysis and its potential impact on clinical practice. Overall, TDA shows great promise as a powerful tool for the analysis of complex cardiovascular signals and may offer significant insights into the understanding and management of cardiovascular diseases. Full article
(This article belongs to the Special Issue Nonlinear Dynamics in Cardiovascular Signals)
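To make the persistent-homology step concrete, here is a minimal sketch using a Takens delay embedding of a synthetic pseudo-periodic signal and the ripser package (one of several TDA libraries; the library choice, the synthetic signal, and the parameters are illustrative and not taken from the review):

```python
import numpy as np
from ripser import ripser   # pip install ripser

def delay_embed(x, dim=3, tau=5):
    """Takens delay embedding: map a 1-D signal to points (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau})."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

t = np.linspace(0, 20, 1000)
signal = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(8).normal(size=t.size)
cloud = delay_embed(signal)
diagrams = ripser(cloud, maxdim=1)["dgms"]          # persistence diagrams for H0 and H1
h1 = diagrams[1]
print("most persistent loop lifetime:", (h1[:, 1] - h1[:, 0]).max())  # long-lived loop <-> periodic rhythm
```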
