
Causal Inference for Heterogeneous Data and Information Theory

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (25 July 2022) | Viewed by 37111

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Dr. Kateřina Hlaváčková-Schindler
Guest Editor
Faculty of Computer Science, University of Vienna, 1010 Wien, Austria
Interests: causal inference; time series; information theory; probability; statistics; data mining

Special Issue Information

Dear Colleagues,

Detecting causal effects from observational data has attracted a lot of attention in the past decades. Much of the collected observational data, however, does not come from the traditional statistical setting of randomized experiments. Data collected from different sources are often heterogeneous due to changing circumstances, unobserved confounders, time shifts in the distribution, or the differing time scales or definition domains of the measured observations. These heterogeneities make it difficult to draw valid conclusions about causal effects that generalize well to new data.

This Special Issue focuses on causal inference models for heterogeneous data of (not only) the nature described above. The working approaches and tools of interest are information theory, probability, and the machine learning tools related to them. Information theory is to be interpreted broadly here, including, for instance, classical algorithmic information theory, compression schemes, stochastic complexity, statistical information theory, and control theory. The Special Issue is open to papers that are, in essence, fundamental theoretical research, although demonstration on real or synthetic datasets is encouraged where possible.

This Special Issue will gather the current approaches to the following (and related) topics:

  • Linear and non-linear structural equation models
  • Causal models for categorical and integer-valued variables
  • Graphical models
  • Probabilistic interactive networks
  • Latent variables, confounders
  • Additive noise models
  • Causal inference by information-theoretic criteria
  • Kolmogorov complexity and causality
  • Transfer entropy
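
As a concrete illustration of the last topic, transfer entropy over discrete series admits a simple plug-in estimate. The sketch below (binary data, history length 1, simulated series) is illustrative only; practical estimators use longer histories and bias corrections.

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy T(X -> Y) in bits for
    discrete series, using a history length of 1."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))            # (y_next, y_prev, x_prev)
    n = len(triples)
    p_full = Counter(triples)
    p_cond = Counter((yp, xp) for _, yp, xp in triples)   # counts of (y_prev, x_prev)
    p_pair = Counter((yn, yp) for yn, yp, _ in triples)   # counts of (y_next, y_prev)
    p_prev = Counter(yp for _, yp, _ in triples)          # counts of y_prev
    te = 0.0
    for (yn, yp, xp), c in p_full.items():
        # (c/n) * log2[ p(y_next | y_prev, x_prev) / p(y_next | y_prev) ]
        te += (c / n) * log2((c / p_cond[(yp, xp)]) / (p_pair[(yn, yp)] / p_prev[yp]))
    return te

random.seed(0)
x = [random.randint(0, 1) for _ in range(2000)]   # i.i.d. binary driver
y = [0] + x[:-1]                                  # y copies x with a one-step lag
print(transfer_entropy(x, y))   # close to 1 bit: x's past determines y
print(transfer_entropy(y, x))   # close to 0: y adds no information about x's future
```

Swapping the arguments reverses the direction; this asymmetry is what makes transfer entropy a directed dependence measure.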

Dr. Kateřina Hlaváčková-Schindler
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • heterogeneous data
  • causal inference
  • information theory
  • time series
  • graphical models
  • variable selection
  • additive noise models
  • probabilistic networks
  • categorical and integer variables
  • Kolmogorov complexity

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (14 papers)


Editorial


3 pages, 180 KiB  
Editorial
Causal Inference for Heterogeneous Data and Information Theory
by Kateřina Hlaváčková-Schindler
Entropy 2023, 25(6), 910; https://doi.org/10.3390/e25060910 - 8 Jun 2023
Viewed by 1479
Abstract
The present Special Issue of Entropy, entitled "Causal Inference for Heterogeneous Data and Information Theory", covers various aspects of causal inference [...] Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)

Research


37 pages, 1447 KiB  
Article
Universal Causality
by Sridhar Mahadevan
Entropy 2023, 25(4), 574; https://doi.org/10.3390/e25040574 - 27 Mar 2023
Cited by 4 | Viewed by 3138
Abstract
Universal Causality is a mathematical framework based on higher-order category theory, which generalizes previous approaches based on directed graphs and regular categories. We present a hierarchical framework called UCLA (Universal Causality Layered Architecture), where at the top-most level, causal interventions are modeled as a higher-order category over simplicial sets and objects. Simplicial sets are contravariant functors from the category of ordinal numbers Δ into sets, whose morphisms are order-preserving injections and surjections over finite ordered sets. Non-random interventions on causal structures are modeled as face operators that map n-simplices into lower-level simplices. At the second layer, causal models are defined as a category, for example defining the schema of a relational causal model or a symmetric monoidal category representation of DAG models. The third layer corresponds to the data layer in causal inference, where each causal object is mapped functorially into a set of instances using the category of sets and functions between sets. The fourth homotopy layer defines ways of abstractly characterizing causal models in terms of homotopy colimits, defined in terms of the nerve of a category, a functor that converts a causal (category) model into a simplicial object. Each functor between layers is characterized by a universal arrow, which defines universal elements and representations through the Yoneda Lemma, and induces a Grothendieck category of elements that enables combining formal causal models with data instances, and is related to the notion of ground graphs in relational causal models. Causal inference between layers is defined as a lifting problem, a commutative diagram whose objects are categories, and whose morphisms are functors that are characterized as different types of fibrations. 
We illustrate UCLA using a variety of representations, including causal relational models, symmetric monoidal categorical variants of DAG models, and non-graphical representations, such as integer-valued multisets and separoids, and measure-theoretic and topological models. Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)

22 pages, 803 KiB  
Article
Stan and BART for Causal Inference: Estimating Heterogeneous Treatment Effects Using the Power of Stan and the Flexibility of Machine Learning
by Vincent Dorie, George Perrett, Jennifer L. Hill and Benjamin Goodrich
Entropy 2022, 24(12), 1782; https://doi.org/10.3390/e24121782 - 6 Dec 2022
Cited by 5 | Viewed by 4027
Abstract
A wide range of machine-learning-based approaches have been developed in the past decade, increasing our ability to accurately model nonlinear and nonadditive response surfaces. This has improved performance for inferential tasks such as estimating average treatment effects in situations where standard parametric models may not fit the data well. These methods have also shown promise for the related task of identifying heterogeneous treatment effects. However, the estimation of both overall and heterogeneous treatment effects can be hampered when data are structured within groups if we fail to correctly model the dependence between observations. Most machine learning methods do not readily accommodate such structure. This paper introduces a new algorithm, stan4bart, that combines the flexibility of Bayesian Additive Regression Trees (BART) for fitting nonlinear response surfaces with the computational and statistical efficiencies of using Stan for the parametric components of the model. We demonstrate how stan4bart can be used to estimate average, subgroup, and individual-level treatment effects with stronger performance than other flexible approaches that ignore the multilevel structure of the data as well as multilevel approaches that have strict parametric forms. Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)

17 pages, 957 KiB  
Article
Targeted L1-Regularization and Joint Modeling of Neural Networks for Causal Inference
by Mehdi Rostami and Olli Saarela
Entropy 2022, 24(9), 1290; https://doi.org/10.3390/e24091290 - 13 Sep 2022
Cited by 1 | Viewed by 1404
Abstract
The calculation of the Augmented Inverse Probability Weighting (AIPW) estimator of the Average Treatment Effect (ATE) is carried out in two steps: in the first step, the treatment and outcome are modeled, and in the second step, the predictions are inserted into the AIPW estimator. Model misspecification in the first step has led researchers to utilize machine learning (ML) algorithms instead of parametric algorithms. However, the existence of strong confounders and/or Instrumental Variables (IVs) can lead complex ML algorithms to provide perfect predictions for the treatment model, which can violate the positivity assumption and elevate the variance of AIPW estimators. Thus, the complexity of ML algorithms must be controlled to avoid perfect predictions for the treatment model while still learning the relationship between the confounders and the treatment and outcome. We use two NN architectures with L1-regularization on specific NN parameters and investigate how certain of their hyperparameters should be tuned in the presence of confounders and IVs to achieve a low bias-variance tradeoff for ATE estimators such as the AIPW estimator. Through simulation results, we provide recommendations as to how NNs can be employed for ATE estimation. Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)
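
For readers unfamiliar with the AIPW estimator that this and related papers in the issue build on, a minimal NumPy sketch on simulated data (plain logistic and OLS first-step models, true ATE of 2) follows; it is a hedged illustration of the standard doubly robust formula, not the authors' NN-based procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))                      # observed confounders
logit = 0.8 * x[:, 0] - 0.5 * x[:, 1]
a = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # treatment assignment
y = 2.0 * a + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)   # true ATE = 2

def propensity(X, t, iters=500, lr=0.5):
    """Logistic regression by plain gradient ascent: e(x) = P(A=1 | x)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X1 @ w))
        w += lr * X1.T @ (t - p) / len(X)
    return 1 / (1 + np.exp(-X1 @ w))

def ols_predict(X_fit, y_fit, X_new):
    """OLS outcome model fit on one treatment arm, predicted everywhere."""
    X1 = np.column_stack([np.ones(len(X_fit)), X_fit])
    beta, *_ = np.linalg.lstsq(X1, y_fit, rcond=None)
    return np.column_stack([np.ones(len(X_new)), X_new]) @ beta

e = propensity(x, a)
m1 = ols_predict(x[a == 1], y[a == 1], x)   # model of E[Y | A=1, x]
m0 = ols_predict(x[a == 0], y[a == 0], x)   # model of E[Y | A=0, x]

# Doubly robust AIPW influence values; their mean estimates the ATE.
psi = m1 - m0 + a * (y - m1) / e - (1 - a) * (y - m0) / (1 - e)
print(psi.mean())   # close to the true ATE of 2
```

The estimator is consistent if either the propensity model or the outcome model is correct, which is the double robustness the abstracts above refer to.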

15 pages, 1147 KiB  
Article
Simultaneous Maximum Likelihood Estimation for Piecewise Linear Instrumental Variable Models
by Shuo Shuo Liu and Yeying Zhu
Entropy 2022, 24(9), 1235; https://doi.org/10.3390/e24091235 - 2 Sep 2022
Cited by 2 | Viewed by 1934
Abstract
Analysis of instrumental variables is an effective approach to dealing with endogenous variables and the unmeasured confounding issue in causal inference. We propose using a piecewise linear model to fit the relationship between the continuous instrumental variable and the continuous explanatory variable, as well as the relationship between the continuous explanatory variable and the outcome variable, which generalizes the traditional linear instrumental variable models. The two-stage least squares and limited information maximum likelihood methods are used for the simultaneous estimation of the regression coefficients and the threshold parameters. Furthermore, we study the limiting distribution of the estimators in correctly specified and misspecified models and provide a robust estimation of the variance-covariance matrix. We illustrate the finite sample properties of the estimation in terms of Monte Carlo biases, standard errors, and coverage probabilities via simulated data. Our proposed model is applied to education–salary data to investigate the causal effect of children's years of schooling on estimated hourly wage, with father's years of schooling as the instrumental variable. Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)
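
The piecewise linear model above generalizes standard linear two-stage least squares (2SLS). As a hedged baseline sketch on simulated data with a true causal effect of 2 (not the authors' simultaneous maximum likelihood estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
z = rng.normal(size=n)                        # instrument: affects x, not y directly
u = rng.normal(size=n)                        # unmeasured confounder
x = 1.5 * z + u + rng.normal(size=n)          # endogenous explanatory variable
y = 2.0 * x + 3.0 * u + rng.normal(size=n)    # true causal effect of x on y is 2

def ols(X, target):
    """OLS with intercept; returns fitted values and coefficients."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, target, rcond=None)
    return X1 @ beta, beta

_, beta_ols = ols(x[:, None], y)       # naive OLS: biased by the confounder u

x_hat, _ = ols(z[:, None], x)          # stage 1: project x onto the instrument
_, beta_2sls = ols(x_hat[:, None], y)  # stage 2: regress y on the fitted values

print(beta_ols[1])    # noticeably above 2 (confounding bias)
print(beta_2sls[1])   # close to the true effect of 2
```

Stage 1 strips the part of x that is correlated with the confounder, so stage 2 recovers the causal coefficient; the paper replaces both linear stages with piecewise linear fits.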

13 pages, 1902 KiB  
Article
High Resolution Treatment Effects Estimation: Uncovering Effect Heterogeneities with the Modified Causal Forest
by Hugo Bodory, Hannah Busshoff and Michael Lechner
Entropy 2022, 24(8), 1039; https://doi.org/10.3390/e24081039 - 28 Jul 2022
Cited by 8 | Viewed by 2900
Abstract
There is great demand for inferring causal effect heterogeneity and for open-source statistical software that is readily available to practitioners. The mcf package is an open-source Python package that implements the Modified Causal Forest (mcf), a causal machine learner. We replicate three well-known studies in the fields of epidemiology, medicine, and labor economics to demonstrate that our mcf package produces aggregate treatment effects that align with previous results and, in addition, provides novel insights on causal effect heterogeneity. The mcf package provides inference for all resolutions of treatment effect estimation that can be identified. We conclude that the mcf constitutes a practical and extensive tool for modern heterogeneous causal effect analysis. Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)

17 pages, 1497 KiB  
Article
The Paradox of Time in Dynamic Causal Systems
by Bob Rehder, Zachary J. Davis and Neil Bramley
Entropy 2022, 24(7), 863; https://doi.org/10.3390/e24070863 - 23 Jun 2022
Cited by 6 | Viewed by 1869
Abstract
Recent work has shown that people use temporal information including order, delay, and variability to infer causality between events. In this study, we build on this work by investigating the role of time in dynamic systems, where causes take continuous values and also continually influence their effects. Recent studies of learning in these systems explored short interactions in a setting with rapidly evolving dynamics and modeled people as relying on simpler, resource-limited strategies to grapple with the stream of information. A natural question that arises from such an account is whether interacting with systems that unfold more slowly might reduce the systematic errors that result from these strategies. Paradoxically, we find that slowing the task indeed reduced the frequency of one type of error, albeit at the cost of increasing the overall error rate. To explain these results we posit that human learners analyze continuous dynamics into discrete events and use the observed relationships between events to draw conclusions about causal structure. We formalize this intuition in terms of a novel Causal Event Abstraction model and show that this model indeed captures the observed pattern of errors. We comment on the implications these results have for causal cognition. Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)

19 pages, 1750 KiB  
Article
Testability of Instrumental Variables in Linear Non-Gaussian Acyclic Causal Models
by Feng Xie, Yangbo He, Zhi Geng, Zhengming Chen, Ru Hou and Kun Zhang
Entropy 2022, 24(4), 512; https://doi.org/10.3390/e24040512 - 5 Apr 2022
Cited by 2 | Viewed by 2346
Abstract
This paper investigates the problem of selecting instrumental variables relative to a target causal influence X→Y from observational data generated by linear non-Gaussian acyclic causal models in the presence of unmeasured confounders. We propose a necessary condition for detecting variables that cannot serve as instrumental variables. Unlike many existing conditions for continuous variables, i.e., that at least two or more valid instrumental variables are present in the system, our condition is designed with a single instrumental variable. We then characterize the graphical implications of our condition in linear non-Gaussian acyclic causal models. Given that the existing graphical criteria for the instrument validity are not directly testable given observational data, we further show whether and how such graphical criteria can be checked by exploiting our condition. Finally, we develop a method to select the set of candidate instrumental variables given observational data. Experimental results on both synthetic and real-world data show the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)

23 pages, 556 KiB  
Article
Normalized Augmented Inverse Probability Weighting with Neural Network Predictions
by Mehdi Rostami and Olli Saarela
Entropy 2022, 24(2), 179; https://doi.org/10.3390/e24020179 - 25 Jan 2022
Cited by 4 | Viewed by 2731
Abstract
The estimation of the average treatment effect (ATE) as a causal parameter is carried out in two steps, where in the first step, the treatment and outcome are modeled to incorporate the potential confounders, and in the second step, the predictions are inserted into ATE estimators such as the augmented inverse probability weighting (AIPW) estimator. Due to concerns regarding the non-linear or unknown relationships between confounders and the treatment and outcome, there has been interest in applying non-parametric methods such as machine learning (ML) algorithms instead. Some of the literature proposes using two separate neural networks (NNs) with no regularization on the networks' parameters other than the stochastic gradient descent (SGD) used in the NNs' optimization. Our simulations indicate that the AIPW estimator suffers extensively if no regularization is utilized. We propose the normalization of AIPW (referred to as nAIPW), which can be helpful in some scenarios. nAIPW provably has the same properties as AIPW, that is, the double-robustness and orthogonality properties. Further, if the first-step algorithms converge fast enough, under regularity conditions, nAIPW will be asymptotically normal. We also compare the performance of AIPW and nAIPW in terms of the bias and variance when small to moderate L1 regularization is imposed on the NNs. Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)

24 pages, 420 KiB  
Article
Conducting Causal Analysis by Means of Approximating Probabilistic Truths
by Bo Pieter Johannes Andrée
Entropy 2022, 24(1), 92; https://doi.org/10.3390/e24010092 - 6 Jan 2022
Cited by 2 | Viewed by 1925
Abstract
The current paper develops a probabilistic theory of causation using measure-theoretical concepts and suggests practical routines for conducting causal inference. The theory is applicable to both linear and high-dimensional nonlinear models. An example is provided using random forest regressions and daily data on yield spreads. The application tests how uncertainty in short- and long-term inflation expectations interacts with spreads in the daily Bitcoin price. The results are contrasted with those obtained by standard linear Granger causality tests. It is shown that the suggested measure-theoretic approaches not only lead to better predictive models but also to more plausible, parsimonious descriptions of possible causal flows. The paper concludes that researchers interested in causal analysis should be more aspirational in terms of developing predictive capabilities, even if the interest is in inference and not in prediction per se. The theory developed in the paper provides practitioners with guidance for developing causal models using new machine learning methods that have, so far, remained relatively underutilized in this context. Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)
20 pages, 858 KiB  
Article
Causal Discovery in High-Dimensional Point Process Networks with Hidden Nodes
by Xu Wang and Ali Shojaie
Entropy 2021, 23(12), 1622; https://doi.org/10.3390/e23121622 - 1 Dec 2021
Cited by 2 | Viewed by 2332
Abstract
Thanks to technological advances leading to near-continuous time observations, emerging multivariate point process data offer new opportunities for causal discovery. However, a key obstacle in achieving this goal is that many relevant processes may not be observed in practice. Naïve estimation approaches that ignore these hidden variables can generate misleading results because of the unadjusted confounding. To plug this gap, we propose a deconfounding procedure to estimate high-dimensional point process networks with only a subset of the nodes being observed. Our method allows flexible connections between the observed and unobserved processes. It also allows the number of unobserved processes to be unknown and potentially larger than the number of observed nodes. Theoretical analyses and numerical studies highlight the advantages of the proposed method in identifying causal interactions among the observed processes. Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)

17 pages, 396 KiB  
Article
Interventional Fairness with Indirect Knowledge of Unobserved Protected Attributes
by Sainyam Galhotra, Karthikeyan Shanmugam, Prasanna Sattigeri and Kush R. Varshney
Entropy 2021, 23(12), 1571; https://doi.org/10.3390/e23121571 - 25 Nov 2021
Cited by 3 | Viewed by 1988
Abstract
The deployment of machine learning (ML) systems in applications with societal impact has motivated the study of fairness for marginalized groups. Often, the protected attribute is absent from the training dataset for legal reasons. However, datasets still contain proxy attributes that capture protected information and can inject unfairness in the ML model. Some deployed systems allow auditors, decision makers, or affected users to report issues or seek recourse by flagging individual samples. In this work, we examine such systems and consider a feedback-based framework where the protected attribute is unavailable and the flagged samples are indirect knowledge. The reported samples are used as guidance to identify the proxy attributes that are causally dependent on the (unknown) protected attribute. We work under the causal interventional fairness paradigm. Without requiring the underlying structural causal model a priori, we propose an approach that performs conditional independence tests on observed data to identify such proxy attributes. We theoretically prove the optimality of our algorithm, bound its complexity, and complement it with an empirical evaluation demonstrating its efficacy on various real-world and synthetic datasets. Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)

21 pages, 1221 KiB  
Article
Causal Algebras on Chain Event Graphs with Informed Missingness for System Failure
by Xuewen Yu and Jim Q. Smith
Entropy 2021, 23(10), 1308; https://doi.org/10.3390/e23101308 - 6 Oct 2021
Cited by 6 | Viewed by 1889
Abstract
Graph-based causal inference has recently been successfully applied to explore system reliability and to predict failures in order to improve systems. One popular causal analysis following Pearl and Spirtes et al. to study causal relationships embedded in a system is to use a Bayesian network (BN). However, certain causal constructions that are particularly pertinent to the study of reliability are difficult to express fully through a BN. Our recent work demonstrated the flexibility of using a Chain Event Graph (CEG) instead to capture causal reasoning embedded within engineers’ reports. We demonstrated that an event tree rather than a BN could provide an alternative framework that could capture most of the causal concepts needed within this domain. In particular, a causal calculus for a specific type of intervention, called a remedial intervention, was devised on this tree-like graph. In this paper, we extend the use of this framework to show that not only remedial maintenance interventions but also interventions associated with routine maintenance can be well-defined using this alternative class of graphical model. We also show that the complexity in making inference about the potential relationships between causes and failures in a missing data situation in the domain of system reliability can be elegantly addressed using this new methodology. Causal modelling using a CEG is illustrated through examples drawn from the study of reliability of an energy distribution network. Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)

13 pages, 432 KiB  
Article
The Role of Instrumental Variables in Causal Inference Based on Independence of Cause and Mechanism
by Nataliya Sokolovska and Pierre-Henri Wuillemin
Entropy 2021, 23(8), 928; https://doi.org/10.3390/e23080928 - 21 Jul 2021
Cited by 4 | Viewed by 2444
Abstract
Causal inference methods based on conditional independence construct Markov equivalent graphs and cannot be applied to bivariate cases. The approaches based on the independence of cause and mechanism state, on the contrary, that causal relationships can be inferred between two observed variables. In our contribution, we pose a challenge to reconcile these two research directions. We study the role of latent variables such as latent instrumental variables and hidden common causes in causal graphical structures. We show that methods based on the independence of cause and mechanism indirectly contain traces of the existence of hidden instrumental variables. We derive a novel algorithm to infer causal relationships between two variables, and we validate the proposed method on simulated data and on a benchmark of cause-effect pairs. Our experiments illustrate that the proposed approach is simple and extremely competitive in terms of empirical accuracy compared to the state-of-the-art methods. Full article
(This article belongs to the Special Issue Causal Inference for Heterogeneous Data and Information Theory)
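
A common way to make the independence-of-cause-and-mechanism idea concrete in the bivariate case is an additive-noise-style check: regress in both directions and prefer the direction whose residuals look independent of the predictor. The sketch below uses polynomial fits and a binned mutual information score on simulated data; it is a generic illustration, not the algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3000
x = rng.uniform(-1, 1, n)
y = x + x ** 3 + rng.uniform(-0.2, 0.2, n)   # additive noise in the X -> Y direction

def poly_residuals(u, v, deg=3):
    """Residuals of a degree-`deg` polynomial regression of v on u."""
    coeffs = np.polyfit(u, v, deg)
    return v - np.polyval(coeffs, u)

def binned_mi(u, v, bins=8):
    """Plug-in mutual information (bits) from a 2-D histogram."""
    h, _, _ = np.histogram2d(u, v, bins=bins)
    p = h / h.sum()
    pu = p.sum(axis=1, keepdims=True)
    pv = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / (pu @ pv)[mask])).sum())

# Residual-predictor dependence score for each candidate direction:
mi_xy = binned_mi(x, poly_residuals(x, y))   # X -> Y: residuals ~ independent of x
mi_yx = binned_mi(y, poly_residuals(y, x))   # Y -> X: residual spread varies with y
print("X->Y" if mi_xy < mi_yx else "Y->X")   # the smaller score wins
```

The backward regression cannot absorb the noise additively, so its residuals stay dependent on the predictor; that asymmetry is what the independence-of-mechanism family of methods exploits.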
