
Causality and Complex Systems

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Complexity".

Deadline for manuscript submissions: closed (15 December 2024) | Viewed by 43716

Special Issue Editors


Guest Editor
School of Systems Sciences, Beijing Normal University, Beijing 100875, China
Interests: complex systems; machine learning; causal emergence; scaling theory

Guest Editor
Department of Computer Science, Tsinghua University, Beijing 100084, China
Interests: artificial intelligence; stable learning; complex systems; graph neural networks; causality; information theory; recommendation systems; machine learning; data mining; multimedia; network embedding

Guest Editor
School of Biomedical Engineering and Imaging Sciences, King’s College London, London WC2R 2LS, UK
Interests: algorithmic information theory; complex systems; philosophy of algorithmic randomness; Kolmogorov complexity; computational biology; digital philosophy; programmability; natural computation; cellular automata; algorithmic probability; logical depth; measures of sophistication

Special Issue Information

Dear Colleagues,

Complex systems, e.g., living systems, environmental systems, health and medical systems, socioeconomic systems, and even online communities, are unified wholes formed by a large number of interacting units. One reason for their complexity is the widespread presence of entangled causal structures. Causality may be emergent, meaning that stronger causal laws may exist at the macro-scale rather than the micro-scale (as in statistical mechanics). Causal forces may also cross levels: downward causality, in which the collective behaviors or aggregated variables of the whole system (e.g., price) affect individual behaviors, exists widely in living and social systems. Understanding the emergence and evolution of these causal structures at various scales within a large complex system is therefore very important.

However, discovering intricate causal relationships, identifying emergent causal laws at the macro level from the behavioral data of complex systems, and using these relationships and laws to infer new information are all hard problems. Newly emerging machine learning technologies (e.g., causal representation learning, causal reinforcement learning), information theory (e.g., information decomposition), causal discovery, causal inference, and related methods offer new solutions. This Special Issue focuses on, but is not limited to, the following topics:

  • Causal discovery;
  • Causal inference;
  • Causal emergence;
  • Downward causality;
  • Measures of complexity and causality;
  • Complex system modeling;
  • Causal machine learning;
  • Causal representation learning;
  • Causal reinforcement learning;
  • Information decomposition;
  • Related applications.

Prof. Dr. Jiang Zhang
Dr. Peng Cui
Dr. Hector Zenil
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • causal discovery
  • causal inference
  • causal emergence
  • downward causality
  • causal representation learning
  • causal reinforcement learning
  • renormalization
  • multi-scale analysis
  • information decomposition
  • information geometry

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the journal website.

Published Papers (16 papers)

18 pages, 952 KiB  
Article
Causal Factor Disentanglement for Few-Shot Domain Adaptation in Video Prediction
by Nathan Cornille, Katrien Laenen, Jingyuan Sun and Marie-Francine Moens
Entropy 2023, 25(11), 1554; https://doi.org/10.3390/e25111554 - 17 Nov 2023
Viewed by 1509
Abstract
An important challenge in machine learning is performing with accuracy when few training samples are available from the target distribution. If a large number of training samples from a related distribution are available, transfer learning can be used to improve the performance. This paper investigates how to do transfer learning more effectively if the source and target distributions are related through a Sparse Mechanism Shift for the application of next-frame prediction. We create Sparse Mechanism Shift-TempoRal Intervened Sequences (SMS-TRIS), a benchmark to evaluate transfer learning for next-frame prediction derived from the TRIS datasets. We then propose to exploit the Sparse Mechanism Shift property of the distribution shift by disentangling the model parameters with regard to the true causal mechanisms underlying the data. We use the Causal Identifiability from TempoRal Intervened Sequences (CITRIS) model to achieve this disentanglement via causal representation learning. We show that encouraging disentanglement with the CITRIS extensions can improve performance, but their effectiveness varies depending on the dataset and backbone used. We find that it is effective only when encouraging disentanglement actually succeeds in increasing disentanglement. We also show that an alternative method designed for domain adaptation does not help, indicating the challenging nature of the SMS-TRIS benchmark. Full article
(This article belongs to the Special Issue Causality and Complex Systems)

25 pages, 1320 KiB  
Article
Kernel-Based Independence Tests for Causal Structure Learning on Functional Data
by Felix Laumann, Julius von Kügelgen, Junhyung Park, Bernhard Schölkopf and Mauricio Barahona
Entropy 2023, 25(12), 1597; https://doi.org/10.3390/e25121597 - 28 Nov 2023
Cited by 2 | Viewed by 1841
Abstract
Measurements of systems taken along a continuous functional dimension, such as time or space, are ubiquitous in many fields, from the physical and biological sciences to economics and engineering. Such measurements can be viewed as realisations of an underlying smooth process sampled over the continuum. However, traditional methods for independence testing and causal learning are not directly applicable to such data, as they do not take into account the dependence along the functional dimension. By using specifically designed kernels, we introduce statistical tests for bivariate, joint, and conditional independence for functional variables. Our method not only extends the applicability to functional data of the Hilbert–Schmidt independence criterion (hsic) and its d-variate version (d-hsic), but also allows us to introduce a test for conditional independence by defining a novel statistic for the conditional permutation test (cpt) based on the Hilbert–Schmidt conditional independence criterion (hscic), with optimised regularisation strength estimated through an evaluation rejection rate. Our empirical results of the size and power of these tests on synthetic functional data show good performance, and we then exemplify their application to several constraint- and regression-based causal structure learning problems, including both synthetic examples and real socioeconomic data. Full article
(This article belongs to the Special Issue Causality and Complex Systems)
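For readers unfamiliar with HSIC, the statistic underlying the paper above can be illustrated on ordinary vector data; the paper's contribution is its extension to functional data, which this sketch does not reproduce. The following is a minimal Python version of the biased HSIC estimator with Gaussian kernels (function names and bandwidth are our own choices, not the authors' code):

```python
import numpy as np

def gaussian_gram(x, sigma=1.0):
    # Pairwise squared distances, then a Gaussian (RBF) kernel matrix.
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    # Biased empirical HSIC: trace(K H L H) / n^2, with H the centering matrix.
    n = len(x)
    K = gaussian_gram(x, sigma)
    L = gaussian_gram(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y_dep = x + 0.1 * rng.normal(size=(200, 1))   # strongly dependent on x
y_ind = rng.normal(size=(200, 1))             # independent of x

hsic_dep = hsic(x, y_dep)   # large: dependence detected
hsic_ind = hsic(x, y_ind)   # near zero: no dependence
```

In practice the statistic is compared against a permutation-based null distribution rather than read off directly; the paper's conditional test (cpt with hscic) adds further machinery beyond this sketch.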

17 pages, 930 KiB  
Article
Detection of Anticipatory Dynamics between a Pair of Zebrafish
by Wei-Jie Chen, I-Shih Ko, Chi-An Lin, Chun-Jen Chen, Jiun-Shian Wu and C. K. Chan
Entropy 2024, 26(1), 13; https://doi.org/10.3390/e26010013 - 21 Dec 2023
Viewed by 1273
Abstract
Anticipatory dynamics (AD) is unusual in that responses from an information receiver can appear ahead of triggers from the source, so the direction of information flow (DIF) is needed to establish causality. Although anticipatory dynamics is believed to be important for animals’ survival, natural examples are rare. Time series (trajectories) from a pair of interacting zebrafish are used to look for the existence of AD in natural systems. In order to obtain the DIF between the two trajectories, we made use of a special experimental design to designate the information source. We also used common statistical tools such as Granger causality and transfer entropy to detect DIF. In our experiments, we found that the majority of fish pairs do not show any anticipatory behaviors and only a few pairs displayed possible AD. Interestingly, fish in this latter group do not display AD all the time. Our findings suggest that the formation of fish schooling might not need the help of AD, and that new tools are needed for the detection of causality in AD systems. Full article
(This article belongs to the Special Issue Causality and Complex Systems)
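The study above detects the direction of information flow with Granger causality and transfer entropy. As a rough illustration of the latter (our own minimal sketch with our own binning choices, not the authors' pipeline), the following snippet computes a binned, lag-1 plug-in transfer entropy on a synthetic pair in which x drives y:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.normal()

def transfer_entropy(src, dst, bins=2):
    # Binned plug-in estimate of TE(src -> dst) at lag 1, in bits:
    # sum over p(d_t, d_{t-1}, s_{t-1}) log [p(d_t|d_{t-1}, s_{t-1}) / p(d_t|d_{t-1})].
    edges = np.quantile(src, np.linspace(0, 1, bins + 1)[1:-1])
    s = np.digitize(src, edges)
    d = np.digitize(dst, np.quantile(dst, np.linspace(0, 1, bins + 1)[1:-1]))
    joint = np.zeros((bins,) * 3)
    for a, b, c in zip(d[1:], d[:-1], s[:-1]):   # (d_t, d_{t-1}, s_{t-1})
        joint[a, b, c] += 1
    joint /= joint.sum()
    p_bc = joint.sum(axis=0, keepdims=True)       # p(d_{t-1}, s_{t-1})
    p_ab = joint.sum(axis=2, keepdims=True)       # p(d_t, d_{t-1})
    p_b = joint.sum(axis=(0, 2), keepdims=True)   # p(d_{t-1})
    mask = joint > 0
    return float(np.sum(joint[mask] *
                        np.log2((joint * p_b)[mask] / (p_bc * p_ab)[mask])))

te_xy = transfer_entropy(x, y)   # substantial: x drives y
te_yx = transfer_entropy(y, x)   # near zero: no feedback
```

The asymmetry te_xy much greater than te_yx is what a DIF analysis looks for; real analyses also need surrogate-based significance testing, which this sketch omits.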

19 pages, 1776 KiB  
Article
Neural Causal Information Extractor for Unobserved Causes
by Keng-Hou Leong, Yuxuan Xiu, Bokui Chen and Wai Kin (Victor) Chan
Entropy 2024, 26(1), 46; https://doi.org/10.3390/e26010046 - 31 Dec 2023
Viewed by 1811
Abstract
Causal inference aims to faithfully depict the causal relationships between given variables. However, in many practical systems, variables are often partially observed, and some unobserved variables could carry significant information and induce causal effects on a target. Identifying these unobserved causes remains a challenge, and existing works have not considered extracting the unobserved causes while retaining the causes that have already been observed and included. In this work, we aim to construct the implicit variables with a generator–discriminator framework named the Neural Causal Information Extractor (NCIE), which can complement the information of unobserved causes and thus provide a complete set of causes with both observed causes and the representations of unobserved causes. By maximizing the mutual information between the targets and the union of observed causes and implicit variables, the implicit variables we generate could complement the information that the unobserved causes should have provided. The synthetic experiments show that the implicit variables preserve the information and dynamics of the unobserved causes. In addition, extensive real-world time series prediction tasks show improved precision after introducing implicit variables, thus indicating their causality to the targets. Full article
(This article belongs to the Special Issue Causality and Complex Systems)

56 pages, 2990 KiB  
Review
Emergence and Causality in Complex Systems: A Survey of Causal Emergence and Related Quantitative Studies
by Bing Yuan, Jiang Zhang, Aobo Lyu, Jiayun Wu, Zhipeng Wang, Mingzhe Yang, Kaiwei Liu, Muyun Mou and Peng Cui
Entropy 2024, 26(2), 108; https://doi.org/10.3390/e26020108 - 24 Jan 2024
Cited by 2 | Viewed by 8828
Abstract
Emergence and causality are two fundamental concepts for understanding complex systems. They are interconnected. On one hand, emergence refers to the phenomenon where macroscopic properties cannot be solely attributed to the cause of individual properties. On the other hand, causality can exhibit emergence, meaning that new causal laws may arise as we increase the level of abstraction. Causal emergence (CE) theory aims to bridge these two concepts and even employs measures of causality to quantify emergence. This paper provides a comprehensive review of recent advancements in quantitative theories and applications of CE. It focuses on two primary challenges: quantifying CE and identifying it from data. The latter task requires the integration of machine learning and neural network techniques, establishing a significant link between causal emergence and machine learning. We highlight two problem categories: CE with machine learning and CE for machine learning, both of which emphasize the crucial role of effective information (EI) as a measure of causal emergence. The final section of this review explores potential applications and provides insights into future perspectives. Full article
(This article belongs to the Special Issue Causality and Complex Systems)
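The survey above centers on effective information (EI) as a measure of causal emergence. As a hedged illustration of the basic quantity (EI of a discrete Markov chain under a maximum-entropy intervention, following Hoel-style formulations; this is our own sketch, not code from the survey), EI can be computed directly from a transition probability matrix:

```python
import numpy as np

def effective_information(tpm):
    # EI of a Markov transition matrix: the mutual information between X_t
    # and X_{t+1} when X_t is set to the uniform (maximum-entropy)
    # intervention distribution. Result is in bits.
    tpm = np.asarray(tpm, dtype=float)
    n = tpm.shape[0]
    joint = tpm / n                       # p(x_t, x_{t+1}) under uniform do()
    p_next = joint.sum(axis=0)            # marginal of X_{t+1}
    prod = np.outer(np.full(n, 1.0 / n), p_next)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / prod[mask])))

# Deterministic, bijective dynamics: every state maps to a unique successor,
# so EI attains its maximum log2(n) bits.
perm = np.array([[0, 1, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 1],
                 [1, 0, 0, 0]], dtype=float)

# Maximally noisy dynamics: every row uniform, so EI = 0.
noise = np.full((4, 4), 0.25)

ei_perm = effective_information(perm)    # 2.0 bits
ei_noise = effective_information(noise)  # 0.0 bits
```

Causal emergence in this framework means a coarse-grained (macro) TPM has higher EI than the micro TPM; the survey discusses how to find such coarse-grainings, including with neural networks.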

30 pages, 1872 KiB  
Article
An Exact Theory of Causal Emergence for Linear Stochastic Iteration Systems
by Kaiwei Liu, Bing Yuan and Jiang Zhang
Entropy 2024, 26(8), 618; https://doi.org/10.3390/e26080618 - 23 Jul 2024
Cited by 1 | Viewed by 1183
Abstract
After coarse-graining a complex system, the dynamics of its macro-state may exhibit more pronounced causal effects than those of its micro-state. This phenomenon, known as causal emergence, is quantified by the indicator of effective information. However, two challenges confront this theory: the absence of well-developed frameworks in continuous stochastic dynamical systems and the reliance on coarse-graining methodologies. In this study, we introduce an exact theoretic framework for causal emergence within linear stochastic iteration systems featuring continuous state spaces and Gaussian noise. Building upon this foundation, we derive an analytical expression for effective information across general dynamics and identify optimal linear coarse-graining strategies that maximize the degree of causal emergence when the dimension averaged uncertainty eliminated by coarse-graining has an upper bound. Our investigation reveals that the maximal causal emergence and the optimal coarse-graining methods are primarily determined by the principal eigenvalues and eigenvectors of the dynamic system’s parameter matrix, with the latter not being unique. To validate our propositions, we apply our analytical models to three simplified physical systems, comparing the outcomes with numerical simulations, and consistently achieve congruent results. Full article
(This article belongs to the Special Issue Causality and Complex Systems)

14 pages, 578 KiB  
Article
A Synergistic Perspective on Multivariate Computation and Causality in Complex Systems
by Thomas F. Varley
Entropy 2024, 26(10), 883; https://doi.org/10.3390/e26100883 - 21 Oct 2024
Viewed by 1281
Abstract
What does it mean for a complex system to “compute” or perform “computations”? Intuitively, we can understand complex “computation” as occurring when a system’s state is a function of multiple inputs (potentially including its own past state). Here, we discuss how computational processes in complex systems can be generally studied using the concept of statistical synergy, which is information about an output that can only be learned when the joint state of all inputs is known. Building on prior work, we show that this approach naturally leads to a link between multivariate information theory and topics in causal inference, specifically, the phenomenon of causal colliders. We begin by showing how Berkson’s paradox implies a higher-order, synergistic interaction between multidimensional inputs and outputs. We then discuss how causal structure learning can refine and orient analyses of synergies in empirical data, and when empirical synergies meaningfully reflect computation versus when they may be spurious. We end by proposing that this conceptual link between synergy, causal colliders, and computation can serve as a foundation on which to build a mathematically rich general theory of computation in complex systems. Full article
(This article belongs to the Special Issue Causality and Complex Systems)
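The collider phenomenon the paper builds on (Berkson's paradox) is easy to demonstrate numerically. In this hypothetical sketch, two independent causes become negatively correlated once we condition on their common effect:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
x = rng.normal(size=n)                  # two independent causes
y = rng.normal(size=n)
z = x + y + 0.1 * rng.normal(size=n)    # collider: z is their joint effect

r_all = np.corrcoef(x, y)[0, 1]               # ~0: x and y are independent
sel = z > 1.0                                  # condition on the collider
r_cond = np.corrcoef(x[sel], y[sel])[0, 1]     # strongly negative (Berkson)
```

The induced dependence between x and y given z is exactly the kind of higher-order, synergistic relationship between inputs and output that the paper connects to multivariate information theory.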

22 pages, 3678 KiB  
Article
Robust Model-Free Identification of the Causal Networks Underlying Complex Nonlinear Systems
by Guanxue Yang, Shimin Lei and Guanxiao Yang
Entropy 2024, 26(12), 1063; https://doi.org/10.3390/e26121063 - 6 Dec 2024
Viewed by 424
Abstract
Inferring causal networks from noisy observations is of vital importance in various fields. Due to the complexity of system modeling, the way in which universal and feasible inference algorithms are studied is a key challenge for network reconstruction. In this study, without any assumptions, we develop a novel model-free framework to uncover only the direct relationships in networked systems from observations of their nonlinear dynamics. Our proposed methods are termed multiple-order Polynomial Conditional Granger Causality (PCGC) and sparse PCGC (SPCGC). PCGC mainly adopts polynomial functions to approximate the whole system model, which can be used to judge the interactions among nodes through subsequent nonlinear Granger causality analysis. For SPCGC, Lasso optimization is first used for dimension reduction, and then PCGC is executed to obtain the final network. Specifically, the conditional variables are fused in this general, model-free framework regardless of their formulations in the system model, which could effectively reconcile the inference of direct interactions with an indirect influence. Based on many classical dynamical systems, the performances of PCGC and SPCGC are analyzed and verified. Generally, the proposed framework could be quite promising for the provision of certain guidance for data-driven modeling with an unknown model. Full article
(This article belongs to the Special Issue Causality and Complex Systems)
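The core idea of PCGC, using polynomial features so that Granger-style variance-ratio comparisons can detect nonlinear influences, can be sketched as follows. This is our own toy version (single lag, two nodes, no Lasso stage), not the authors' implementation; here a purely quadratic coupling is invisible to a linear Granger comparison but visible to the polynomial one:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    # Purely nonlinear (quadratic) influence of x on y.
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] ** 2 + 0.1 * rng.normal()

def resid_var(A, b):
    # Residual variance of the least-squares fit b ~ A.
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return (b - A @ coef).var()

d0, d1, s1 = y[1:], y[:-1], x[:-1]
ones = np.ones_like(d1)
A_restricted = np.column_stack([ones, d1])            # AR model of y only
A_linear = np.column_stack([ones, d1, s1])            # + linear x lag
A_poly = np.column_stack([ones, d1, s1, s1 ** 2])     # + polynomial x lags

# Granger-style gain: log ratio of restricted to full residual variance.
gain_linear = np.log(resid_var(A_restricted, d0) / resid_var(A_linear, d0))
gain_poly = np.log(resid_var(A_restricted, d0) / resid_var(A_poly, d0))
```

In the full method, the regression is conditioned on all other observed nodes (and SPCGC first shrinks the candidate set with Lasso), which is what separates direct from indirect influences.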

28 pages, 1013 KiB  
Perspective
Efficient, Formal, Material, and Final Causes in Biology and Technology
by George F. R. Ellis
Entropy 2023, 25(9), 1301; https://doi.org/10.3390/e25091301 - 5 Sep 2023
Cited by 4 | Viewed by 3723
Abstract
This paper considers how a classification of causal effects as comprising efficient, formal, material, and final causation can provide a useful understanding of how emergence takes place in biology and technology, with formal, material, and final causation all including cases of downward causation; they each occur in both synchronic and diachronic forms. Taken together, they underlie why all emergent levels in the hierarchy of emergence have causal powers (which is Noble’s principle of biological relativity) and so why causal closure only occurs when the upwards and downwards interactions between all emergent levels are taken into account, contra to claims that some underlying physics level is by itself causality complete. A key feature is that stochasticity at the molecular level plays an important role in enabling agency to emerge, underlying the possibility of final causation occurring in these contexts. Full article
(This article belongs to the Special Issue Causality and Complex Systems)

13 pages, 775 KiB  
Review
Comparison of Bootstrap Methods for Estimating Causality in Linear Dynamic Systems: A Review
by Fumikazu Miwakeichi and Andreas Galka
Entropy 2023, 25(7), 1070; https://doi.org/10.3390/e25071070 - 17 Jul 2023
Cited by 3 | Viewed by 1832
Abstract
In this study, we present a thorough comparison of the performance of four different bootstrap methods for assessing the significance of causal analysis in time series data. For this purpose, multivariate simulated data are generated by a linear feedback system. The methods investigated are uncorrelated Phase Randomization Bootstrap (uPRB), which generates surrogate data with no cross-correlation between variables by randomizing the phase in the frequency domain; Time Shift Bootstrap (TSB), which generates surrogate data by randomizing the phase in the time domain; Stationary Bootstrap (SB), which calculates standard errors and constructs confidence regions for weakly dependent stationary observations; and AR-Sieve Bootstrap (ARSB), a resampling method based on AutoRegressive (AR) models that approximates the underlying data-generating process. The uPRB method accurately identifies variable interactions but fails to detect self-feedback in some variables. The TSB method, despite performing worse than uPRB, is unable to detect feedback between certain variables. The SB method gives consistent causality results, although its ability to detect self-feedback decreases, as the mean block width increases. The ARSB method shows superior performance, accurately detecting both self-feedback and causality across all variables. Regarding the analysis of the Impulse Response Function (IRF), only the ARSB method succeeds in detecting both self-feedback and causality in all variables, aligning well with the connectivity diagram. Other methods, however, show considerable variations in detection performance, with some detecting false positives and others only detecting self-feedback. Full article
(This article belongs to the Special Issue Causality and Complex Systems)
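Of the four bootstrap methods compared above, phase-randomization surrogates are the easiest to sketch. The following minimal Python version (our own, not the study's uPRB code) preserves a series' amplitude spectrum, and hence its autocorrelation, while randomizing its Fourier phases, destroying any phase-locked structure a causality test would pick up:

```python
import numpy as np

def phase_randomize(ts, rng):
    # Surrogate series with the same amplitude spectrum as `ts` but
    # randomized Fourier phases.
    n = len(ts)
    spec = np.fft.rfft(ts)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(spec))
    phases[0] = np.angle(spec[0])        # keep the DC component unchanged
    if n % 2 == 0:
        phases[-1] = np.angle(spec[-1])  # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)

rng = np.random.default_rng(4)
ts = np.cumsum(rng.normal(size=1024))    # a strongly autocorrelated series
surr = phase_randomize(ts, rng)

# The amplitude spectra match to machine precision; the time courses differ.
amp_ts = np.abs(np.fft.rfft(ts))
amp_surr = np.abs(np.fft.rfft(surr))
spec_err = np.max(np.abs(amp_surr - amp_ts)) / np.max(amp_ts)
```

A causality statistic computed on many such surrogates gives a null distribution against which the statistic on the original data can be tested; the study's uncorrelated variant draws independent phases per variable to also destroy cross-correlations.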

28 pages, 4671 KiB  
Article
Neural Information Squeezer for Causal Emergence
by Jiang Zhang and Kaiwei Liu
Entropy 2023, 25(1), 26; https://doi.org/10.3390/e25010026 - 23 Dec 2022
Cited by 9 | Viewed by 3369 | Correction
Abstract
Conventional studies of causal emergence have revealed that stronger causality can be obtained on the macro-level than the micro-level of the same Markovian dynamical system if an appropriate coarse-graining strategy has been applied to the micro-states. However, identifying this emergent causality from data remains an unsolved problem because the appropriate coarse-graining strategy cannot be found easily. This paper proposes a general machine learning framework called the Neural Information Squeezer to automatically extract the effective coarse-graining strategy and the macro-level dynamics, as well as to identify causal emergence directly from time series data. By using an invertible neural network, we can decompose any coarse-graining strategy into two separate procedures: information conversion and information discarding. In this way, we can not only exactly control the width of the information channel but also derive some important properties analytically. We also show how our framework can extract the coarse-graining functions and the dynamics on different levels, as well as identify causal emergence from the data, on several example systems. Full article
(This article belongs to the Special Issue Causality and Complex Systems)

15 pages, 775 KiB  
Article
Flickering Emergences: The Question of Locality in Information-Theoretic Approaches to Emergence
by Thomas F. Varley
Entropy 2023, 25(1), 54; https://doi.org/10.3390/e25010054 - 28 Dec 2022
Cited by 10 | Viewed by 3478
Abstract
“Emergence”, the phenomenon where a complex system displays properties, behaviours, or dynamics not trivially reducible to its constituent elements, is one of the defining properties of complex systems. Recently, there has been a concerted effort to formally define emergence using the mathematical framework of information theory, which proposes that emergence can be understood in terms of how the states of wholes and parts collectively disclose information about the system’s collective future. In this paper, we show how a common, foundational component of information-theoretic approaches to emergence implies an inherent instability to emergent properties, which we call flickering emergence. A system may, on average, display a meaningful emergent property (be it an informative coarse-graining, or higher-order synergy), but for particular configurations, that emergent property falls apart and becomes misinformative. We show existence proofs that flickering emergence occurs in two different frameworks (one based on coarse-graining and another based on multivariate information decomposition) and argue that any approach based on temporal mutual information will display it. Finally, we argue that flickering emergence should not be a disqualifying property of any model of emergence, but that it should be accounted for when attempting to theorize about how emergence relates to practical models of the natural world. Full article
(This article belongs to the Special Issue Causality and Complex Systems)

59 pages, 15006 KiB  
Article
Causality Analysis with Information Geometry: A Comparison
by Heng Jie Choong, Eun-jin Kim and Fei He
Entropy 2023, 25(5), 806; https://doi.org/10.3390/e25050806 - 16 May 2023
Cited by 3 | Viewed by 2299
Abstract
The quantification of causality is vital for understanding various important phenomena in nature and laboratories, such as brain networks, environmental dynamics, and pathologies. The two most widely used methods for measuring causality are Granger Causality (GC) and Transfer Entropy (TE), which rely on measuring the improvement in the prediction of one process based on the knowledge of another process at an earlier time. However, they have their own limitations, e.g., in applications to nonlinear, non-stationary data, or non-parametric models. In this study, we propose an alternative approach to quantify causality through information geometry that overcomes such limitations. Specifically, based on the information rate that measures the rate of change of the time-dependent distribution, we develop a model-free approach called information rate causality that captures the occurrence of the causality based on the change in the distribution of one process caused by another. This measurement is suitable for analyzing numerically generated non-stationary, nonlinear data. The latter are generated by simulating different types of discrete autoregressive models which contain linear and nonlinear interactions in unidirectional and bidirectional time-series signals. Our results show that information rate causality can capture the coupling of both linear and nonlinear data better than GC and TE in the several examples explored in the paper. Full article
(This article belongs to the Special Issue Causality and Complex Systems)

15 pages, 2121 KiB  
Article
Inferring a Causal Relationship between Environmental Factors and Respiratory Infections Using Convergent Cross-Mapping
by Daipeng Chen, Xiaodan Sun and Robert A. Cheke
Entropy 2023, 25(5), 807; https://doi.org/10.3390/e25050807 - 17 May 2023
Cited by 4 | Viewed by 2627
Abstract
The incidence of respiratory infections in a population is related to many factors, among which environmental factors such as air quality, temperature, and humidity have attracted much attention. In particular, air pollution has caused widespread discomfort and concern in developing countries. Although the correlation between respiratory infections and air pollution is well known, establishing causality between them has remained elusive. In this study, through theoretical analysis, we updated the procedure for extended convergent cross-mapping (CCM, a method of causal inference) to infer causality between periodic variables. We validated the new procedure on synthetic data generated by a mathematical model. For real data from Shaanxi Province, China, covering 1 January 2010 to 15 November 2016, we first confirmed that the refined method is applicable by investigating the periodicity of influenza-like illness cases, an air quality index (AQI), temperature, and humidity through wavelet analysis. We then showed that air quality (quantified by the AQI), temperature, and humidity affect daily influenza-like illness cases; in particular, respiratory infection cases increased progressively with increasing AQI, with a time delay of 11 days.
(This article belongs to the Special Issue Causality and Complex Systems)
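The core of CCM is compact enough to sketch. The following is a minimal, generic implementation of cross-map skill (delay embedding plus simplex-style nearest-neighbour prediction), not the extended procedure for periodic variables developed in the paper; the function name, weighting scheme, and parameters are illustrative:

```python
import numpy as np

def ccm_skill(cause, effect, E=2, tau=1):
    """Cross-map skill: reconstruct `cause` from the shadow manifold of `effect`.

    If `cause` drives `effect`, the attractor reconstructed from `effect`
    contains information about `cause`, so the correlation between the
    cross-mapped estimates and the true `cause` series is high.
    """
    n = len(effect) - (E - 1) * tau
    # Delay embedding: row i = (effect[i], effect[i+tau], ..., effect[i+(E-1)tau]).
    M = np.column_stack([effect[i * tau : i * tau + n] for i in range(E)])
    target = cause[(E - 1) * tau :]
    preds = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                          # exclude the point itself
        idx = np.argsort(d)[: E + 1]           # E + 1 nearest neighbours
        w = np.exp(-d[idx] / max(d[idx][0], 1e-12))
        w /= w.sum()
        preds[i] = w @ target[idx]             # locally weighted estimate
    return np.corrcoef(preds, target)[0, 1]
```

On the classic pair of unidirectionally coupled logistic maps, the skill of recovering the driver from the driven series is high, while the reverse direction stays near zero; convergence of this skill as the library of points grows is what CCM uses as the causal signature.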

16 pages, 4618 KiB  
Article
Schizophrenia MEG Network Analysis Based on Kernel Granger Causality
by Qiong Wang, Wenpo Yao, Dengxuan Bai, Wanyi Yi, Wei Yan and Jun Wang
Entropy 2023, 25(7), 1006; https://doi.org/10.3390/e25071006 - 30 Jun 2023
Cited by 4 | Viewed by 1543
Abstract
Network analysis is an important approach for exploring complex brain structures under different pathological and physiological conditions. In this paper, we employ multivariate inhomogeneous polynomial kernel Granger causality (MKGC) to construct directed weighted networks that characterize schizophrenia magnetoencephalography (MEG). We first generate data from coupled autoregressive processes to test the effectiveness of MKGC against bivariate linear Granger causality and bivariate inhomogeneous polynomial kernel Granger causality; the results suggest that MKGC outperforms both. On this basis, we apply MKGC to construct effective connectivity networks of MEG for patients with schizophrenia (SCZs). We measure three network features, i.e., strength, nonequilibrium, and complexity, to characterize schizophrenia MEG. Our results suggest that the MEG of healthy controls (HCs) has a denser effective connectivity network than that of SCZs. The most significant difference in in-connectivity strength is observed in the right frontal network (p=0.001). The strongest out-connectivity strength for all subjects occurs in the temporal area, with the most significant between-group difference in the left occipital area (p=0.0018). The total connectivity strength of the frontal, temporal, and occipital areas is higher in HCs than in SCZs. The nonequilibrium feature over the whole brain is significantly higher in SCZs than in HCs (p=0.012); however, Shannon entropy results suggest that healthy MEG networks have higher complexity than schizophrenia networks. Overall, MKGC provides a reliable approach for constructing MEG brain networks and characterizing their properties.
(This article belongs to the Special Issue Causality and Complex Systems)
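A much-reduced bivariate analogue of the kernel idea can be sketched by running the same variance-ratio Granger test on an inhomogeneous polynomial feature expansion of the lagged signals. This is a simplified illustration of the principle, not the multivariate MKGC of the paper; the function names are illustrative. With degree=1 it reduces to ordinary bivariate linear Granger causality, which mirrors the paper's comparison:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """Inhomogeneous polynomial expansion: all monomials of the columns of X
    up to the given degree, including the constant term."""
    cols = [np.ones(len(X))]
    for d in range(1, degree + 1):
        for combo in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, list(combo)], axis=1))
    return np.column_stack(cols)

def kernel_gc(x, y, lag=1, degree=2):
    """Granger causality x -> y in a polynomial feature space (log variance
    ratio of restricted vs. full regression); degree=1 recovers ordinary
    bivariate linear Granger causality."""
    Y = y[lag:]
    own = np.column_stack([y[lag - k - 1 : len(y) - k - 1] for k in range(lag)])
    both = np.column_stack(
        [own] + [x[lag - k - 1 : len(x) - k - 1] for k in range(lag)]
    )

    def resid_var(X):
        P = poly_features(X, degree)
        beta, *_ = np.linalg.lstsq(P, Y, rcond=None)
        return np.var(Y - P @ beta)

    return np.log(resid_var(own) / resid_var(both))
```

On a quadratically coupled pair (y driven by the square of x), the degree-2 statistic flags the coupling while the degree-1 linear one misses it, which is the kind of nonlinear interaction that motivates the kernel extension.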

2 pages, 201 KiB  
Correction
Correction: Zhang, J.; Liu, K. Neural Information Squeezer for Causal Emergence. Entropy 2023, 25, 26
by Jiang Zhang and Kaiwei Liu
Entropy 2023, 25(10), 1387; https://doi.org/10.3390/e25101387 - 28 Sep 2023
Viewed by 795
Abstract
There was an error in the original publication [...]
(This article belongs to the Special Issue Causality and Complex Systems)