Article

Discovering Potential Correlations via Hypercontractivity †

1 Coordinated Science Lab, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
2 Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
3 Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA
4 Department of Industrial and Enterprise Systems Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper given at the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017.
Entropy 2017, 19(11), 586; https://doi.org/10.3390/e19110586
Submission received: 11 September 2017 / Revised: 18 October 2017 / Accepted: 30 October 2017 / Published: 2 November 2017
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)

Abstract: Discovering a correlation from one variable to another variable is of fundamental scientific and practical interest. While existing correlation measures are suitable for discovering average correlation, they fail to discover hidden or potential correlations. To bridge this gap, (i) we postulate a set of natural axioms that we expect a measure of potential correlation to satisfy; (ii) we show that the rate of information bottleneck, i.e., the hypercontractivity coefficient, satisfies all the proposed axioms; (iii) we provide a novel estimator to estimate the hypercontractivity coefficient from samples; and (iv) we provide numerical experiments demonstrating that this proposed estimator discovers potential correlations among various indicators of WHO datasets, is robust in discovering gene interactions from gene expression time series data, and is statistically more powerful than the estimators for other correlation measures in binary hypothesis testing of canonical examples of potential correlations.

1. Introduction

Measuring the strength of an association between two random variables is a fundamental topic of broad scientific interest. Pearson’s correlation coefficient [1] dates from over a century ago and was generalized seven decades ago as maximal correlation (mCor) to handle nonlinear dependencies [2,3,4]. Novel correlation measures to identify different kinds of associations continue to be proposed in the literature; these include the maximal information coefficient (MIC) [5] and distance correlation (dCor) [6]. Despite their differences, these dependence measures share a common theme: they measure the empirical average dependence. In other words, they are factual measures of dependence, and their relevance is limited when we seek a potential dependence of one random variable on another. For instance, consider a hypothetical city with very few smokers. A standard measure of correlation on this city's historical data on smoking and lung cancer will fail to discover the fact that smoking causes cancer, since the average correlation is very small. On the other hand, clearly, there is a potential correlation between smoking and lung cancer; indeed, applications of this nature abound in modern data science, including a recent one on genetic pathway discovery [7].
Discovery of a potential correlation naturally leads one to ask for a measure of potential correlation that is statistically well-founded and addresses practical needs. Such is the focus of this work, where our proposed measure of potential correlation is based on a novel interpretation of the Information Bottleneck (IB) principle [8]. The IB principle has been used to address one of the fundamental tasks in supervised learning: given samples $\{X_i, Y_i\}_{i=1}^n$, how do we find a compact summary of a variable X that is most informative in explaining another variable Y? The output of the IB principle is a compact summary of X that is most relevant to Y, and it has a wide range of applications [9,10].
We use this IB principle to create a measure of correlation based on the following intuition: if X is (potentially) correlated with Y, then a relatively compact summary of X can still be very informative about Y. In other words, the maximal ratio of how informative a summary is in explaining Y to how compact the summary is with respect to X is, conceptually speaking, an indicator of potential correlation from X to Y. Quantifying the compactness by $I(U;X)$ and the informativeness by $I(U;Y)$, we consider the rate of information bottleneck as a measure of potential correlation:
$$s(X;Y) \;\triangleq\; \sup_{U - X - Y} \frac{I(U;Y)}{I(U;X)}, \qquad (1)$$
where $U - X - Y$ forms a Markov chain and the supremum is over all summaries U of X. This intuition is made precise in Section 2, where we formally define a natural notion of potential correlation (Axiom 6) and show that the rate of information bottleneck $s(X;Y)$ captures this potential correlation (Theorem 1) while other standard measures of correlation fail (Theorem 2).
This ratio has only recently been identified as the hypercontractivity coefficient [11]. Hypercontractivity has a distinguished and central role in a large number of technical arenas including quantum physics [12,13], theoretical computer science [14,15], mathematics [16,17,18] and probability theory [19,20]. In this paper, we provide a novel interpretation to the hypercontractivity coefficient as a measure of potential correlation by demonstrating that it satisfies a natural set of axioms such a measure is expected to obey.
For practical use in discovering correlations, the standard correlation coefficients are equipped with natural sample-based estimators. For the hypercontractivity coefficient, however, estimation from samples is widely acknowledged to be challenging, especially for continuous random variables [21,22,23]. There is no existing algorithm to estimate the hypercontractivity coefficient in general [21], and there is no existing algorithm for solving IB from samples either [22,23]. We provide a novel estimator of the hypercontractivity coefficient, the first of its kind, by bringing together the recent theoretical discoveries in [11,24] of an alternative definition of the hypercontractivity coefficient as a ratio of Kullback–Leibler divergences, defined in (10), and recent advances in joint optimization (the max step in Equation (1)) and in estimating information measures from samples using importance sampling [25].
Our main contributions are the following:
  • We postulate a set of natural axioms that a measure of potential correlation from X to Y should satisfy (Section 2.1).
  • We show that $s(X;Y)$, our proposed measure of potential correlation, satisfies all the axioms we postulate (Section 2.2). In comparison, we prove that existing standard measures of correlation not only fail to satisfy the proposed axioms, but also fail to capture canonical examples of potential correlations captured by $s(X;Y)$ (Section 2.3). Another natural candidate is mutual information, but it is not clear how to interpret its value, as it is unnormalized, unlike all other measures of correlation, which lie between zero and one.
  • Computation of the hypercontractivity coefficient from samples is a known challenging open problem. We introduce a novel estimator to compute the hypercontractivity coefficient from i.i.d. samples in a statistically consistent manner for continuous random variables, using ideas from importance sampling and kernel density estimation (Section 3).
  • In a series of synthetic experiments, we show empirically that our estimator for the hypercontractivity coefficient is statistically more powerful in discovering a potential correlation than existing correlation estimators; a larger power means a larger successful detection rate for a fixed false alarm rate (Section 4.1).
  • We show applications of our estimator of the hypercontractivity coefficient on two important datasets: In Section 4.2, we demonstrate that it discovers hidden potential correlations among various national indicators in WHO datasets, including how aid is potentially correlated with income growth. In Section 4.3, we consider the following gene pathway recovery problem: we are given samples of four gene expression time series, and, assuming we know that gene A causes B, B causes C, and C causes D, the problem is to discover that these causations occur in the sequential order A to B, then B to C, then C to D. We show empirically that the estimator of the hypercontractivity coefficient recovers this order accurately from a vastly smaller number of samples than other state-of-the-art causal influence estimators.

2. Axiomatic Approach to Measure Potential Correlations

We propose a set of axioms that we expect a measure of potential correlation to satisfy. We then show that the hypercontractivity coefficient, first introduced in [19], satisfies all the proposed axioms, and hence propose it as a measure of potential correlation. We also show that other standard correlation coefficients and mutual information violate the proposed axioms.

2.1. Axioms for Potential Correlation

We postulate that a measure of potential correlation $\rho \colon \mathcal{X} \times \mathcal{Y} \to [0,1]$ between two random variables $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$ should satisfy:
1. $\rho(X,Y)$ is defined for any pair of non-constant random variables X and Y.
2. $0 \le \rho(X,Y) \le 1$.
3. $\rho(X,Y) = 0$ iff X and Y are statistically independent.
4. For bijective Borel-measurable functions $f, g \colon \mathbb{R} \to \mathbb{R}$, $\rho(X,Y) = \rho(f(X), g(Y))$.
5. If $(X,Y) \sim \mathcal{N}(\mu, \Sigma)$, then $\rho(X,Y) = |\rho|$, where ρ is the Pearson correlation coefficient.
6. $\rho(X,Y) = 1$ if there exists a subset $\mathcal{X}_r \subseteq \mathcal{X}$ such that for a pair of continuous random variables $(X,Y) \in \mathcal{X}_r \times \mathcal{Y}$, $Y = f(X)$ for a Borel-measurable and non-constant continuous function f.
Axioms 1–5 are identical to a subset of the celebrated axioms of Rényi [4], which ensure that the measure is properly normalized, invariant under bijective transformations, and recovers the Pearson correlation for jointly Gaussian random variables. Rényi’s original axioms for a measure of correlation in [4] included Axioms 1–5 and additionally required that the measure ρ satisfy:
6'. $\rho(X,Y) = 1$ if, for Borel-measurable functions f or g, $Y = f(X)$ or $X = g(Y)$.
7'. $\rho(X,Y) = \rho(Y,X)$.
The Pearson correlation violates a subset of Rényi’s axioms (Axioms 3, 4, and 6'). Together with recent empirical successes in multimodal deep learning (e.g., [26,27,28]), Rényi’s axiomatic approach has been a major justification of the Hirschfeld–Gebelein–Rényi (HGR) maximal correlation coefficient, defined as $\mathrm{mCor}(X,Y) := \sup_{f,g} \mathbb{E}[f(X)\, g(Y)]$, which satisfies all of Rényi’s axioms [2]. Here, the supremum is over all measurable functions with $\mathbb{E}[f(X)] = \mathbb{E}[g(Y)] = 0$ and $\mathbb{E}[f^2(X)] = \mathbb{E}[g^2(Y)] = 1$. However, maximal correlation is not the only measure satisfying all of Rényi’s axioms, as we show in the following.
Proposition 1.
For any function $F \colon [0,1] \times [0,1] \to [0,1]$ satisfying $F(x,y) = F(y,x)$, $F(x,x) = x$, and $F(x,y) = 0$ only if $xy = 0$, the symmetrized quantity $F\big(\sqrt{s(X;Y)}, \sqrt{s(Y;X)}\big)$ satisfies all of Rényi’s axioms.
This follows from the fact that the hypercontractivity coefficient $s(X;Y)$ satisfies all but the symmetry Axiom 7' (Theorem 1), from which it follows that such symmetrized versions satisfy all the axioms, e.g., $\frac{1}{2}\big(\sqrt{s(X;Y)} + \sqrt{s(Y;X)}\big)$ and $\big(s(X;Y)\, s(Y;X)\big)^{1/4}$. A formal proof is provided in Section 5.1.
From Rényi’s original axioms, we remove Axiom 7', which ensures symmetry, as directionality is fundamental in measuring the potential correlation from X to Y. We further replace Axiom 6' by Axiom 6, as a variable X has full potential to be correlated with Y if there exists a domain $\mathcal{X}_r$ on which X and Y are deterministically dependent and non-degenerate (i.e., f is not a constant function), as illustrated in Figure 1 for a linear function and a quadratic function.

2.2. The Hypercontractivity Coefficient Satisfies All Axioms

We show that the hypercontractivity coefficient defined in Equation (1) satisfies all of Axioms 1–6. Intuitively, $s(X;Y)$ measures how much potential correlation X has with Y. For example, if X and Y are independent, then $s(X;Y) = 0$, as X has no correlation with Y (Axiom 3). By the data processing inequality, it follows that $s(X;Y)$ lies between zero and one (Axiom 2) and is invariant under bijective transformations (Axiom 4). For jointly Gaussian variables X and Y with Pearson correlation ρ, we can show that $s(X;Y) = s(Y;X) = \rho^2$; hence, the square root of $s(X;Y)$ satisfies Axiom 5. In fact, $s(X;Y)$ satisfies all the desired axioms for potential correlation, which we make precise in the following theorem, whose proof is provided in Section 5.2.
Theorem 1.
The hypercontractivity coefficient $s(X;Y)$ satisfies Axioms 1–6.
In particular, the hypercontractivity coefficient satisfies Axiom 6 for potential correlation, unlike other measures of correlation (see Theorem 2 for examples). If there is a potential for X, in a possibly rare regime in $\mathcal{X}$, to be fully correlated with Y such that $Y = f(X)$, then the hypercontractivity coefficient is maximal: $s(X;Y) = 1$. In the following section, we show that existing correlation measures, on the other hand, violate the proposed axioms.

2.3. Standard Correlation Coefficients Violate the Axioms

We analyze existing measures of correlation under a scenario with potential correlation (Axiom 6) and find that none of them satisfy Axiom 6. Suppose X and Y are independent (i.e., no correlation) on a subset $\mathcal{X}_d$ of the domain $\mathcal{X}$, and allow X and Y to be arbitrarily correlated on the rest of the domain $\mathcal{X}_r$, such that $\mathcal{X} = \mathcal{X}_d \cup \mathcal{X}_r$. We further assume that the independent part is dominant and the correlated part is rare: let $\alpha := P(X \in \mathcal{X}_r)$, and consider the scenario where α is small. A good measure of potential correlation is expected to capture the correlation in $\mathcal{X}_r$ even if it is rare (i.e., even if α is small). To make this task more challenging, we assume that the conditional distribution of $Y \mid \{X \in \mathcal{X}_r\}$ is the same as that of $Y \mid \{X \notin \mathcal{X}_r\}$. Figure 1 illustrates sampled points for two examples from such a scenario, and more examples are shown in Figure 3. Our main result is the analysis of the HGR maximal correlation (mCor) [2], distance correlation (dCor) [6], and maximal information coefficient (MIC) [5], which shows that these measures vanish with α even if the dependence in the rare regime is very high. In particular, suppose $Y = f(X)$ for $X \in \mathcal{X}_r$; then all three correlation coefficients vanish as α gets small. This in particular violates Axiom 6. The reason is that standard correlation coefficients measure the average correlation, whereas the hypercontractivity coefficient measures the potential correlation. The experimental comparisons of the power of these measures confirm our analytical predictions (Figure 4 in Section 4). The formal statement is below, and the proof is provided in Section 5.3.
Theorem 2.
Consider a pair of continuous random variables $(X,Y) \in \mathcal{X} \times \mathcal{Y}$. Suppose $\mathcal{X}$ is partitioned as $\mathcal{X}_r \cup \mathcal{X}_d = \mathcal{X}$ such that $P_{Y|X}(S \mid X \in \mathcal{X}_r) = P_{Y|X}(S \mid X \in \mathcal{X}_d)$ for all $S \subseteq \mathcal{Y}$, and Y is independent of X for $X \in \mathcal{X}_d$. Let $\alpha = P\{X \in \mathcal{X}_r\}$. The HGR maximal correlation coefficient is
$$\mathrm{mCor}(X,Y) = \sqrt{\alpha}\;\mathrm{mCor}(X_r, Y), \qquad (2)$$
the distance correlation coefficient is
$$\mathrm{dCor}(X,Y) = \alpha\;\mathrm{dCor}(X_r, Y), \qquad (3)$$
and the maximal information coefficient is upper bounded by
$$\mathrm{MIC}(X,Y) \le \alpha\;\mathrm{MIC}(X_r, Y), \qquad (4)$$
where $X_r$ is the random variable X conditioned on the rare domain $X \in \mathcal{X}_r$.
Under the rare/dominant scenario considered in Theorem 2, $s(X;Y) \ge \mathrm{mCor}^2(X;Y)$. It is well known that this inequality holds for any X and Y [19]. In particular, Theorem 3 in [29] shows that the hypercontractivity coefficient is a natural extension of the popular HGR maximal correlation coefficient, as follows.
Remark 1
(Connection between $s(X;Y)$ and $\mathrm{mCor}(X,Y)$ [29]). The squared HGR maximal correlation is a special case of the hypercontractivity optimization in Equation (10), restricted to searching over distributions $r(x)$ in a close neighborhood of $p(x)$.
As $s(X;Y)$ searches over a larger space, it is always larger than or equal to $\mathrm{mCor}^2(X;Y)$. This gives an intuitive justification for using $s(X;Y)$ as a measure of potential influence: we allow a search over a larger space, properly normalized by the KL divergence, in the hope of finding a potential distribution $r(x)$ that can influence Y significantly. While the hypercontractivity coefficient is a natural extension of the HGR maximal correlation coefficient, there is an important difference between the hypercontractivity coefficient and the HGR maximal correlation coefficient (and other correlation measures): hypercontractivity is directional.
Remark 2
(Asymmetry of $s(X;Y)$). The hypercontractivity coefficient is asymmetric in X and Y, while HGR maximal correlation, distance correlation, and MIC are symmetric.
Under the rare/dominant scenario considered in Theorem 2, the hypercontractivity coefficient $s(X;Y)$ is large, because it measures the potential correlation from X to Y. On the other hand, the inverse hypercontractivity coefficient $s(Y;X)$, which measures the potential correlation from Y to X, is small, as there is no apparent potential correlation from Y to X. This is made precise in the following proposition, whose proof is in Section 5.4.
Proposition 2.
Under the hypotheses of Theorem 2, the hypercontractivity coefficient from Y to X is
$$s(Y;X) = \alpha\, s(Y; X_r),$$
where $X_r$ is the random variable X conditioned on the rare domain $X \in \mathcal{X}_r$.

2.4. Mutual Information Violates the Axioms

Besides the standard correlation measures, another measure widely used to quantify the strength of dependence is mutual information. We can show that mutual information satisfies Axiom 6 if we replace the value 1 by ∞. However, there are two key problems:
  • Practically, mutual information is unnormalized, i.e., $I(X;Y) \in [0, \infty)$. Hence, it provides no absolute indication of the strength of the dependence.
  • Mathematically, we are looking for a quantity that tensorizes, i.e., does not change when there are many i.i.d. copies of the same pair of random variables.
Remark 3
(Tensorization property of $s(X;Y)$ [30]). The hypercontractivity coefficient tensorizes, i.e.,
$$s(X_1, \ldots, X_n;\, Y_1, \ldots, Y_n) = s(X_1; Y_1) \quad \text{for i.i.d. } (X_i, Y_i),\; i = 1, \ldots, n.$$
On the other hand, mutual information is additive, i.e.,
$$I(X_1, \ldots, X_n;\, Y_1, \ldots, Y_n) = n\, I(X_1; Y_1) \quad \text{for i.i.d. } (X_i, Y_i),\; i = 1, \ldots, n.$$
Tensorizing quantities capture the strongest relationship among independent copies while additive quantities capture the sum. For instance, mutual information could be large because a small amount of information accumulates over many of the independent components of X and Y (when X and Y are high dimensional) while tensorizing quantities would rule out this scenario, where there is no strong dependence. When the components are not independent, hypercontractivity indeed pools information from different components to find the strongest direction of dependence, which is a desirable property.
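As a minimal numerical illustration of this contrast, consider the jointly Gaussian case, for which closed forms are available: $I(X;Y) = -\tfrac{1}{2}\log(1-\rho^2)$ is the standard Gaussian mutual information, and $s(X;Y) = \rho^2$ by Section 2.2. The following Python sketch (our own illustration; variable names are not from the paper) contrasts the additive and tensorizing behaviors:

```python
import numpy as np

rho = 0.6          # Pearson correlation of each i.i.d. Gaussian pair
n_copies = 10      # number of i.i.d. copies (X_i, Y_i)

# Per-pair closed forms for jointly Gaussian (X, Y).
mi_single = -0.5 * np.log(1 - rho**2)   # I(X_1; Y_1) in nats
s_single = rho**2                        # s(X_1; Y_1), from Section 2.2

# Mutual information is additive: it grows linearly with the number of copies.
mi_joint = n_copies * mi_single
# The hypercontractivity coefficient tensorizes: it stays constant.
s_joint = s_single

print(f"I(X^n; Y^n) = {mi_joint:.3f} nats (grows with n)")
print(f"s(X^n; Y^n) = {s_joint:.3f}       (independent of n)")
```

Even with a modest per-pair correlation, the joint mutual information grows without bound in n, while the tensorizing quantity does not; this is exactly the accumulation effect described above.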
One natural way to normalize mutual information is by the logarithm of the cardinality of the input/output alphabets [31]. The popular correlation measure MIC can be interpreted as a similar effort to normalize mutual information, and it is one of our baselines.
Given that the other correlation measures and mutual information do not satisfy our axioms, a natural question is whether hypercontractivity is the unique measure satisfying all the proposed axioms. In the following, we show that the hypercontractivity coefficient is not the only one satisfying all the proposed axioms, just as HGR maximal correlation is not the only measure satisfying Rényi's original axioms.

2.5. Hypercontractivity Ribbon

We show that a family of measures known as the hypercontractivity ribbon, which includes the hypercontractivity coefficient as a special case, satisfies all the axioms. The hypercontractivity ribbon [19,32] is a class of measures parametrized by $\alpha > 0$:
$$r_\alpha(X;Y) = \sup_{r(x,y) \ne p(x,y)} \frac{D(r(y)\,\|\,p(y))}{D(r(x)\,\|\,p(x)) + \alpha\, D(r(y|x)\,\|\,p(y|x))},$$
where $D(r(x)\,\|\,p(x))$ denotes the KL divergence between $r(x)$ and $p(x)$. An alternative characterization of the hypercontractivity ribbon in terms of mutual information is provided in [24,32]:
$$r_\alpha(X;Y) = \sup_{p(u|x,y)} \frac{I(U;Y)}{I(U;X) + \alpha\, I(U;Y|X)},$$
from which we can see that the hypercontractivity coefficient is a special case of the hypercontractivity ribbon [11]:
$$s(X;Y) = \lim_{\alpha \to \infty} r_\alpha(X;Y) = \lim_{\alpha \to \infty} s_\alpha(X;Y).$$
Proposition 3.
The (re-parameterized) hypercontractivity ribbon $s_\alpha(X;Y) := \big(\alpha\, r_\alpha(X;Y) - 1\big)/(\alpha - 1)$, for $\alpha > 1$, satisfies Axioms 1–6.
Proof. 
By definition, $s_\alpha(X;Y)$ is defined for any pair of non-constant random variables (Axiom 1) and lies between 0 and 1 by the data processing inequality (Axiom 2). We can show that $s_\alpha(X;Y)$ satisfies Axioms 3 and 4 in the same way as we show that $s(X;Y)$ satisfies Axioms 3 and 4. Also, $s_\alpha(X;Y) = \rho^2$ for jointly Gaussian X, Y with Pearson correlation ρ [24] (Axiom 5). Finally, $s_\alpha(X;Y)$ satisfies Axiom 6 because $r_\alpha(X;Y)$ is non-increasing in α, which implies that $s_\alpha(X;Y) = r_\alpha(X;Y) = 1$ if $s(X;Y) = 1$. ☐
Although the hypercontractivity ribbon satisfies all the axioms, a few properties of the hypercontractivity coefficient make it more attractive than the ribbon for practical use: the hypercontractivity coefficient can be efficiently estimated from samples (see Section 3), and it is a natural extension of the popular HGR maximal correlation coefficient (Remark 1).

2.6. Multidimensional X and Y

In this section, we discuss potential correlation for multidimensional X and Y. While most correlation coefficients, including the hypercontractivity coefficient, are well-defined for multidimensional X and Y, the axioms are specific to univariate X and Y. To bridge this gap, we propose replacing Axiom 5, as it is the only axiom specific to univariate random variables.
Axiom 5’.
If $(X,Y) \sim \mathcal{N}\!\left(\mu,\; \Sigma = \begin{pmatrix} \Sigma_X & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_Y \end{pmatrix}\right)$, then $\rho(X,Y) = \big\| \Sigma_X^{-1/2}\, \Sigma_{XY}\, \Sigma_Y^{-1/2} \big\|$, where $\|\cdot\|$ is the spectral norm of a matrix.
This recovers the original Axiom 5 when restricted to univariate X and Y. This naturally generalizes both Rényi’s axioms and the proposed potential correlation axioms to multidimensional X and Y.
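As a sketch of how the Axiom 5' quantity can be evaluated, the following Python snippet (our own illustration; the function name and use of numpy are assumptions, not part of the paper) computes $\|\Sigma_X^{-1/2}\Sigma_{XY}\Sigma_Y^{-1/2}\|$ from a joint covariance and checks that it reduces to $|\rho|$ in the univariate case:

```python
import numpy as np

def gaussian_potential_correlation(cov, dx):
    """Evaluate the Axiom 5' quantity || Sigma_X^{-1/2} Sigma_XY Sigma_Y^{-1/2} ||
    (spectral norm) for a joint Gaussian covariance `cov`, where X occupies
    the first `dx` coordinates and Y the rest."""
    sx = cov[:dx, :dx]
    sxy = cov[:dx, dx:]
    sy = cov[dx:, dx:]

    def inv_sqrt(m):
        # Matrix inverse square root via eigendecomposition (blocks are PSD).
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    m = inv_sqrt(sx) @ sxy @ inv_sqrt(sy)
    return np.linalg.norm(m, 2)   # spectral norm = largest singular value

# Univariate sanity check: recovers |rho|, as the original Axiom 5 requires.
rho = -0.7
cov = np.array([[1.0, rho], [rho, 1.0]])
print(gaussian_potential_correlation(cov, dx=1))   # ~0.7
```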
Proposition 4.
Axiom 5', together with Rényi's original Axioms 1–4, 6', and 7', recovers maximal correlation (mCor) as a measure satisfying all axioms even in the multidimensional case. Axiom 5', together with our proposed Axioms 1–4 and 6, recovers the hypercontractivity coefficient $s(X;Y)$ as a measure satisfying all axioms.
The second statement in the proposition follows from the analyses of the hypercontractivity coefficient of Gaussian distributions in [33]. A formal proof is provided in Section 5.7.

2.7. Noisy, Discrete, and Noisy Discrete Potential Correlation

In this section, we consider more general scenarios of potential correlation than the one in Axiom 6: (i) noisy potential correlation, where $Y = f(X) + Z$ for a Gaussian noise Z for $(X,Y) \in \mathcal{X}_r \times \mathcal{Y}$; (ii) discrete potential correlation, where $\mathcal{X}_r = \{1, \ldots, k\}$; and (iii) noisy discrete potential correlation, a random corruption model. For these three examples, we obtain lower bounds on $s(X;Y)$.
Example 1.
Suppose that for $(X,Y) \in \mathcal{X}_r \times \mathcal{Y}$, $(X,Y) \sim \mathcal{N}(0, \Sigma)$ with
$$\Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.$$
Then
$$s(X;Y) \;\ge\; \frac{\log\frac{1}{1-\rho^2} + \log\frac{1}{1+\rho^2}}{\log\frac{1}{1-\rho^2} + \frac{H(\alpha)}{\alpha}}. \qquad (7)$$
Proof is in Section 5.5.
We now consider discrete $(X,Y)$. We start with the case in which X and Y are perfectly correlated for $(X,Y) \in \mathcal{X}_r \times \mathcal{Y}$.
Example 2.
Suppose that for a pair of discrete random variables $(X,Y) \in \mathcal{X} \times \mathcal{Y}$, there exists a subset $\mathcal{X}_r = \{1, 2, \ldots, k\} \subseteq \mathcal{X}$ for which $P\{X \in \mathcal{X}_r\} = \alpha$, $X \mid \{X \in \mathcal{X}_r\} \sim \mathrm{Unif}[1:k]$, and $Y = X$ for $X \in \mathcal{X}_r$. Then,
$$s(X;Y) \;\ge\; \frac{\log k}{\log k + \log(1/\alpha)}.$$
The inequality holds by considering $r(x) = \mathbb{I}\{x = 1\}$ in (10).
We conjecture that this lower bound is tight for $\alpha \le 0.5$, based on numerical simulations. From this lower bound, we can see the trade-off between k and α: as $k \to \infty$, the lower bound approaches 1; as $\alpha \to 1$, the lower bound approaches 1; and as $\alpha \to 0$, the lower bound approaches 0. In the following, we consider the case where X and Y are not perfectly correlated on $\mathcal{X}_r \times \mathcal{Y}$ for discrete $(X,Y)$. In particular, we consider a random corruption model on $\mathcal{X}_r \times \mathcal{Y}$ and obtain a lower bound on $s(X;Y)$.
Example 3.
Consider a random corruption noise model for $(X,Y) \in \mathcal{X}_r \times \mathcal{Y}$, i.e.,
$$Y = \begin{cases} X & \text{w.p. } 1 - \frac{k}{k-1}\,\epsilon, \\ \mathrm{Unif}[1:k] & \text{w.p. } \frac{k}{k-1}\,\epsilon. \end{cases}$$
Then
$$s(X;Y) \;\ge\; \frac{(1-\epsilon)\log\big(k(1-\epsilon)\big) + \epsilon \log\frac{k\,\epsilon}{k-1}}{\log(k/\alpha)} = \frac{\log k - H_2(\epsilon) - \epsilon\log(k-1)}{\log(k/\alpha)}. \qquad (8)$$
On the other hand,
$$\mathrm{mCor}^2(X;Y) = \alpha\left(1 - \frac{k}{k-1}\,\epsilon\right)^2, \qquad 0 \le \epsilon \le \frac{k-1}{k}. \qquad (9)$$
Proof is in Section 5.6.
In Figure 2, we plot the lower bounds on $s(X;Y)$ and $\mathrm{mCor}(X;Y)$ from Examples 1–3. From these figures, we can see that $s(X;Y)$ increases as $\rho \to 1$ and as $k \to \infty$, and that $s(X;Y)$ is larger than $\mathrm{mCor}(X;Y)$ for ρ close to 1 and large k.
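The closed-form bounds of Examples 1–3 are easy to evaluate numerically. The following Python sketch (our own illustration; natural logarithms and the binary entropy in nats are assumed throughout) computes the quantities plotted in Figure 2:

```python
import numpy as np

def H2(p):
    """Binary entropy in nats."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def lb_example1(rho, alpha):
    # s(X;Y) >= [log 1/(1-rho^2) + log 1/(1+rho^2)] / [log 1/(1-rho^2) + H(alpha)/alpha]
    num = -np.log(1 - rho**2) - np.log(1 + rho**2)
    den = -np.log(1 - rho**2) + H2(alpha) / alpha
    return num / den

def lb_example2(k, alpha):
    # s(X;Y) >= log k / (log k + log(1/alpha))
    return np.log(k) / (np.log(k) + np.log(1 / alpha))

def lb_example3(k, alpha, eps):
    # s(X;Y) >= (log k - H2(eps) - eps*log(k-1)) / log(k/alpha)
    return (np.log(k) - H2(eps) - eps * np.log(k - 1)) / np.log(k / alpha)

def mcor2_example3(k, alpha, eps):
    # mCor^2(X;Y) = alpha * (1 - k*eps/(k-1))^2
    return alpha * (1 - k * eps / (k - 1))**2

print(lb_example1(rho=0.9, alpha=0.1))
print(lb_example2(k=100, alpha=0.1))
print(lb_example3(k=100, alpha=0.1, eps=0.05), mcor2_example3(100, 0.1, 0.05))
```

Sweeping ρ, k, and ε in these functions reproduces the qualitative behavior described above: the bound on $s(X;Y)$ grows toward 1 as ρ → 1 or k → ∞, while the mCor bound is suppressed by the factor α.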

3. Estimator of the Hypercontractivity Coefficient from Samples

In this section, we present an algorithm to compute the hypercontractivity coefficient $s(X;Y)$ from i.i.d. samples $\{X_i, Y_i\}_{i=1}^n$. The computation of the hypercontractivity coefficient from samples is known to be challenging for continuous random variables [22,23], and to the best of our knowledge, there is no known efficient algorithm to compute it from samples. Our estimator is the first efficient algorithm to compute the hypercontractivity coefficient, based on the following equivalent definition, shown recently in [11]:
$$s(X;Y) \;=\; \sup_{r_x \ne p_x} \frac{D(r_y \,\|\, p_y)}{D(r_x \,\|\, p_x)}. \qquad (10)$$
There are two main challenges in computing $s(X;Y)$. The first challenge is: given a marginal distribution $r_x$ and samples from $p_{xy}$, how do we estimate the KL divergences $D(r_y \| p_y)$ and $D(r_x \| p_x)$? The second challenge is the optimization over an infinite-dimensional simplex. We need to combine estimation and optimization in order to compute $s(X;Y)$. Our approach combines ideas from traditional kernel density estimation and importance sampling. Let $w_i = r_x(X_i)/p_x(X_i)$ be the likelihood ratio evaluated at sample i. We propose solving the estimation and optimization jointly as follows:
Estimation: To estimate the KL divergence $D(r_x \,\|\, p_x)$, notice that
$$D(r_x \,\|\, p_x) = \mathbb{E}_{X \sim p_x}\!\left[\frac{r_x(X)}{p_x(X)} \log \frac{r_x(X)}{p_x(X)}\right].$$
Replacing the expectation over $p_x$ by an empirical average, we propose
$$\hat{D}(r_x \,\|\, p_x) = \frac{1}{n}\sum_{i=1}^n \frac{r_x(X_i)}{p_x(X_i)} \log \frac{r_x(X_i)}{p_x(X_i)} = \frac{1}{n}\sum_{i=1}^n w_i \log w_i.$$
For $D(r_y \,\|\, p_y)$, we follow a similar idea, but the challenge lies in computing $v_j = r_y(Y_j)/p_y(Y_j)$. To do this, notice that $r_{xy} = r_x\, p_{y|x}$, so
$$r_y(Y_j) = \mathbb{E}_{X \sim r_x}\big[p_{y|x}(Y_j \mid X)\big] = \mathbb{E}_{X \sim p_x}\!\left[p_{y|x}(Y_j \mid X)\, \frac{r_x(X)}{p_x(X)}\right].$$
Replacing the expectation by an empirical average again, we get the following estimator of $v_j$:
$$\hat{v}_j = \frac{1}{n}\sum_{i=1}^n \frac{p_{y|x}(Y_j \mid X_i)}{p_y(Y_j)}\, \frac{r_x(X_i)}{p_x(X_i)} = \sum_{i=1}^n \underbrace{\frac{p_{xy}(X_i, Y_j)}{n\, p_x(X_i)\, p_y(Y_j)}}_{A_{ij}}\, w_i.$$
We can write this expression in matrix form as $\hat{\mathbf{v}} = A^T \mathbf{w}$. We use a kernel density estimator from [34] to estimate the matrix $A$, but our approach is compatible with any density estimator of choice.
Optimization: Given the estimators of the KL divergences, we can convert the problem of computing $s(X;Y)$ into an optimization problem over the vector $\mathbf{w}$. Here, a constraint $(1/n)\sum_{i=1}^n w_i = 1$ is needed to satisfy $\mathbb{E}_{p_x}[r_x/p_x] = 1$. To improve numerical stability, we use $\log s(X;Y)$ as the objective function. The optimization problem then has the following form:
$$\max_{\mathbf{w}} \;\; \log\!\big(\mathbf{w}^T A \log(A^T \mathbf{w})\big) - \log\!\big(\mathbf{w}^T \log \mathbf{w}\big) \quad \text{subject to} \quad \frac{1}{n}\sum_{i=1}^n w_i = 1, \;\; w_i \ge 0 \;\; \forall i,$$
where $\mathbf{w}^T \log \mathbf{w} = \sum_{i=1}^n w_i \log w_i$ for short. Although this problem is not convex, we apply gradient ascent to maximize the objective. In practice, we initialize $w_i = 1 + \mathcal{N}(0, \sigma^2)$ with $\sigma^2 = 0.01$, so the initial $r_x$ is a mild perturbation of $p_x$. Although we are not guaranteed to achieve the global maximum, we consistently observe in extensive numerical experiments that we have a 50–60% probability of achieving the same maximum value, which we believe to be the global maximum. A theoretical analysis of the landscape of local and global optima and their regions of attraction under gradient ascent is an interesting and challenging open question, outside the scope of this paper.
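To make the procedure concrete, here is a minimal, self-contained Python sketch of the estimator. This is our own illustration: it substitutes scipy's Gaussian KDE for the kernel density estimator of [34], uses a softmax parametrization to enforce the constraints (rather than projected updates), and runs plain gradient ascent; none of these specific implementation choices are prescribed by the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def estimate_hypercontractivity(x, y, n_iter=2000, lr=0.1, seed=0):
    """Sketch of the Section 3 estimator: maximize an empirical version of
    sup_{r_x != p_x} D(r_y||p_y) / D(r_x||p_x) over the importance weights
    w_i = r_x(X_i)/p_x(X_i)."""
    n = len(x)
    # Density estimates (any density estimator could be substituted here).
    px = gaussian_kde(x)(x)                      # p_x(X_i)
    py = gaussian_kde(y)(y)                      # p_y(Y_j)
    xi, yj = np.meshgrid(x, y, indexing="ij")    # all (X_i, Y_j) pairs
    pxy = gaussian_kde(np.vstack([x, y]))(
        np.vstack([xi.ravel(), yj.ravel()])).reshape(n, n)
    # A_ij = p_xy(X_i, Y_j) / (n p_x(X_i) p_y(Y_j)), so that v-hat = A^T w.
    A = pxy / (n * px[:, None] * py[None, :])

    # Parametrize w = n * softmax(theta): enforces w_i >= 0, (1/n) sum w_i = 1.
    rng = np.random.default_rng(seed)
    theta = rng.normal(scale=0.1, size=n)        # mild perturbation of r_x = p_x
    for _ in range(n_iter):
        e = np.exp(theta - theta.max())
        w = n * e / e.sum()
        v = A.T @ w                              # v_j = r_y(Y_j)/p_y(Y_j)
        g = np.sum(v * np.log(v))                # ~ n * D(r_y || p_y)
        h = np.sum(w * np.log(w))                # ~ n * D(r_x || p_x)
        # Gradient of log g - log h w.r.t. w, then chain rule through softmax.
        grad_w = (A @ (np.log(v) + 1.0)) / g - (np.log(w) + 1.0) / h
        grad_theta = w * grad_w - w * (w @ grad_w) / n
        theta += lr * grad_theta
    return g / h                                 # estimate of s(X;Y)
```

As in the paper's procedure, the returned ratio depends on a non-convex optimization, so restarting from several random seeds and keeping the largest value is a sensible precaution; the learning rate and iteration count above are illustrative defaults, not tuned values.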

Consistency of Estimation

While a theoretical understanding of the performance of gradient descent on the optimization step above (with a fixed number of samples) is technically very challenging, we can study the behavior of the solution as the number of samples increases. In particular, we show below (under suitable simplifying assumptions that get to the essence of the proof) that the optimal solution to the finite-sample optimization problem is consistent. Suppose that X is discrete. Further, we restrict the optimization to a quantized and bounded set $T_\Delta$, where $\mathbf{w} \in T_\Delta$ is quantized with gap Δ and satisfies: (1) $C_1 \le w_i \le C_2$ for all i; (2) $(1/n)\sum_{i=1}^n w_i \log w_i > C_0$. We further assume that we have access to the exact matrix $A$ with entries $A_{ij} = p_{xy}(X_i, Y_j)/\big(n\, p_x(X_i)\, p_y(Y_j)\big)$. Define $\hat{s}_\Delta(X;Y) = \max_{\mathbf{w} \in T_\Delta} \mathbf{w}^T A \log(A^T \mathbf{w}) \,/\, \mathbf{w}^T \log \mathbf{w}$; then, with two further simplifying conditions on the joint distribution (formally stated in Section 5.8), we can prove consistency of our estimation procedure:
Theorem 3.
As n goes to infinity, $\hat{s}_\Delta(X;Y)$ converges to $s(X;Y)$ up to the resolution of quantization, in probability; i.e., for any $\varepsilon > 0$ and $\Delta > 0$, with $s(\Delta) = O(\Delta)$, we have
$$\lim_{n \to \infty} P\big(\,|\hat{s}_\Delta(X;Y) - s(X;Y)| > \varepsilon + s(\Delta)\,\big) = 0.$$

4. Experimental Results

We present experimental results on synthetic and real datasets showing that the hypercontractivity coefficient (a) is more powerful in detecting potential correlations than existing measures; (b) discovers hidden potential correlations among various national indicators in WHO datasets; and (c) is more robust in discovering pathways of gene interactions from gene expression time series data.

4.1. Synthetic Data: Power Test on Potential Correlation

As our estimator (and the measure itself) involves a maximization, it could be sensitive to outliers and might capture spurious noise. Via a series of experiments, we show that the hypercontractivity coefficient and our estimator capture true potential correlation. As shown in Figure 3, we generate pairs of datasets: one where X and Y are independent, and one where there is a potential correlation as per our scenario. We run experiments with eight types of functional associations, following the examples from [5,35,36]. For the correlated datasets, out of n samples $\{(x_i, y_i)\}_{i=1}^n$, $\alpha n$ rare but correlated samples lie in $X \in [0, 1]$ and $(1-\alpha)n$ dominant but independent samples lie in $X \in [1, 1.1]$. The rare but correlated samples are generated as $x_i \sim \mathrm{Unif}[0,1]$, $y_i \sim f(x_i) + \mathcal{N}(0, \sigma^2)$ for $i \in [1 : \alpha n]$. The dominant samples are generated as $x_i \sim \mathrm{Unif}[1, 1.1]$, $y_i \sim f(\mathrm{Unif}[0,1]) + \mathcal{N}(0, \sigma^2)$ for $i \in [\alpha n + 1 : n]$.
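A minimal sketch of this data-generating process (our own illustration; the function name is an assumption, and the period-1/2 sine is used as one of the eight association types) is:

```python
import numpy as np

def generate_pair(f, n=320, alpha=0.05, sigma2=0.1, seed=0):
    """Sample one correlated dataset from the rare/dominant scenario of
    Section 4.1: an alpha fraction is correlated via y = f(x) + noise on
    x in [0, 1]; the rest is independent, with x in [1, 1.1] and y drawn
    from the same marginal."""
    rng = np.random.default_rng(seed)
    n_rare = int(alpha * n)
    # Rare but correlated samples.
    x_r = rng.uniform(0, 1, n_rare)
    y_r = f(x_r) + rng.normal(0, np.sqrt(sigma2), n_rare)
    # Dominant but independent samples: same marginal for y.
    x_d = rng.uniform(1, 1.1, n - n_rare)
    y_d = f(rng.uniform(0, 1, n - n_rare)) + rng.normal(0, np.sqrt(sigma2), n - n_rare)
    return np.concatenate([x_r, x_d]), np.concatenate([y_r, y_d])

# Example: sine with period 1/2, i.e., f(x) = sin(4*pi*x).
x, y = generate_pair(lambda x: np.sin(4 * np.pi * x), n=320, alpha=0.05, sigma2=0.1)
```

The independent datasets used as the null case are generated the same way, except that all n samples are drawn from the independent regime.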
Table 1 shows the hypercontractivity coefficient and the other correlation coefficients for the correlated and independent datasets shown in Figure 3, along with the chosen values of α and σ². The correlation estimates with the largest separation in each row are shown in bold. The hypercontractivity coefficient gives the largest separation between the correlated and the independent dataset for most functional types.
A formal statistical approach to test robustness as well as accuracy is to run power tests: testing the power of each estimator in a binary hypothesis test. To compute the power of each estimator, we compare the false negative rate at a fixed false positive rate of, say, 5%. We generate 500 independent datasets and 500 correlated datasets. We compute the correlation estimates on the 500 independent datasets and take the top 5% as a threshold. We then compute the correlation estimates on the 500 correlated datasets. Power is defined as the fraction of correlated datasets for which the correlation estimate exceeds the threshold.
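This power computation can be sketched as follows (our own illustration; `estimator` stands for any of the correlation estimators, and the two dataset factories are placeholders built on the generator above):

```python
import numpy as np

def power(estimator, make_independent, make_correlated,
          n_datasets=500, false_positive_rate=0.05):
    """Power of a correlation estimator at a fixed false positive rate,
    following the procedure of Section 4.1."""
    # Estimates on independent data define the null distribution.
    null = np.sort([estimator(*make_independent(seed=s)) for s in range(n_datasets)])
    # Threshold so that only the top 5% of null estimates exceed it.
    thresh = null[int(np.ceil((1 - false_positive_rate) * n_datasets)) - 1]
    # Power = fraction of correlated datasets whose estimate exceeds it.
    alt = np.array([estimator(*make_correlated(seed=s)) for s in range(n_datasets)])
    return float(np.mean(alt > thresh))
```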
We show empirically that for the linear, quadratic, sine with period 1/2, and step functions, the hypercontractivity coefficient is more powerful than the other measures. For a given setting, a larger power means a larger successful detection rate at a fixed false alarm rate. Figure 4 shows the power of the correlation estimators as a function of the additive noise level σ², for $\alpha = 0.05$ and $n = 320$. The hypercontractivity coefficient is more powerful than the other correlation estimators for most functions. The powers of all estimators are very small for the sine (period 1/8) and circle functions. This is not surprising, given that it is very hard to discern the correlated and independent cases even visually, as shown in Figure 3.
Figure 5 plots the power of the correlation estimators as a function of the noise level for $\alpha = 0.1$ and $n = 320$. As we can see from these figures, the hypercontractivity estimator is more powerful than the other correlation estimators for most functions. For the circle function, the gap between the power of the hypercontractivity estimator and the powers of the other estimators is significantly large.
On the other hand, the hypercontractivity estimator is power deficient for the cubic function. This is because in estimating the hypercontractivity coefficient, we estimate $p(y_j|x_i)/p(y_j)$ using the kernel density estimator (KDE), which gives a smooth estimate; i.e., for $x_i$ and $x_j$ close to each other, the estimated $p(y|x_i)$ and $p(y|x_j)$ are close to each other. Hence, for a correlated dataset for a cubic function, shown in Figure 6C, the estimated $p(y|x)$ does not vary much with x (the estimated $p(y|x)$ for $x \in [0.8, 1]$ and for $x \in [1, 1.1]$ are close to each other). This results in a small hypercontractivity estimate, which in turn results in low power in the hypothesis test. To further analyze this effect, we considered the same dataset but with the dominant independent samples appearing on the left, as shown in Figure 6E,F, and computed the power of the hypercontractivity estimator, shown in Figure 6D. The hypercontractivity estimator is much more powerful than for the original dataset. This is because the estimated $p(y|x)$ for $x \in [0.8, 1]$ is very different from the estimated $p(y|x)$ for $x \in [-0.1, 0]$, which results in a large estimate of the hypercontractivity coefficient for the correlated dataset.
To investigate the dependence of power on α more closely, in Figure 7 we plot the power vs. α for $n = 320$ and $\sigma^2 = 0.1$. The hypercontractivity estimator is more powerful than the other estimators for most values of α, for all functions except the cubic function. For the sine with period 1/8, due to its high frequency, the powers of all the correlation estimators do not increase as α increases. Figure 8 plots the power vs. sample size n for $\alpha = 0.05$ and $\sigma^2 = 0.1$. For the sine with period 1/2, the hypercontractivity estimator is much more powerful than the other estimators for all sample sizes. We can also see that for the sine with period 1/8, the powers of all correlation estimators do not increase as the sample size increases.

4.2. Real Data: Correlation between Indicators of WHO Datasets

We computed the hypercontractivity coefficient, MIC, and Pearson correlation for 1600 pairs of indicators from 202 samples (countries, non-i.i.d.) in the World Health Organization (WHO) dataset [5]. Figure 9 illustrates that the hypercontractivity coefficient discovers hidden potential correlations (e.g., in Figure 9E–H), whereas the other measures fail. Scatter plots of Pearson correlation vs. the hypercontractivity coefficient and MIC vs. the hypercontractivity coefficient for all pairs are presented in Figure 9A,D. The samples for the pairs of indicators corresponding to B,C,E–J in Figure 9A,D are shown in Figure 9B,C,E–J, respectively. In Figure 9B, it is reasonable to assume that the number of bad teeth per child is uncorrelated with the democracy score. The hypercontractivity coefficient, MIC, and Pearson correlation are all small, as expected. In Figure 9C, the correlation between CO₂ emissions and energy use is clearly visible, and all three correlation estimates are close to one.
However, only the hypercontractivity coefficient discovers the hidden potential correlation in Figure 9E–H. In Figure 9E, the data is a mixture of two types of countries: one with a small amount of aid received (less than \$5 × 10⁸) and the other with a large amount of aid received (more than \$5 × 10⁸). The dominant majority of countries (104 out of 146) belong to the first type (small aid), and for those countries, the amount of aid received and the income growth are independent. For the remaining countries with larger aid received, although they are rare, there is a clear correlation between the amount of aid received and the income growth.
Similarly, in Figure 9F, there are two types of countries: one with small arms exports (less than \$2 × 10⁸) and the other with large arms exports (more than \$2 × 10⁸). The dominant majority of countries (71 out of 82) belong to the first type, for which the amount of arms exports and the health expenditure are independent. For the remaining countries of the second type, on the other hand, there is a visible correlation between arms exports and health expenditure. This is expected, since for countries that export arms, GDP is positively correlated with both arms exports and health expenditure, whereas for those without an arms industry, the two are independent.
In Figure 9G, for the dominant number of countries, the number of male deaths from colon and rectum cancer is small (for 145 out of 169 countries it is smaller than 2000, while for the remaining countries it is between 2000 and 50,000), and it is independent of the amount of health expenditure. On the other hand, for the remaining countries with a larger number of male deaths from colon and rectum cancer, the two indicators are positively associated. This is expected, as both indicators are positively correlated with population. Only hypercontractivity discovers this hidden potential correlation; MIC and Pearson correlation are small.
In Figure 9H, for the dominant number of countries, the number of broadband subscribers is very small and is independent of the private health expenditure; 155 out of 180 countries have fewer than 10⁶ broadband subscribers. On the other hand, for the remaining countries, the number of broadband subscribers is positively correlated with the private health expenditure. This is as expected, because both indicators are positively correlated with population. Hypercontractivity is large for this dataset, discovering the hidden correlation, whereas all the other correlations are small.
In Figure 9I, most countries do not have large hydroelectricity facilities, and for those countries, energy use and hydroelectricity consumption are independent (41 out of 53 countries have hydroelectricity ≤ 0.25). On the other hand, for the countries that do have hydroelectricity facilities, the total energy use and the hydroelectricity consumption are positively correlated. Hypercontractivity discovers this hidden potential correlation. Unlike in Figure 9G,H, for which the fraction of correlated samples was only about 14%, in Figure 9I the fraction of correlated samples is about 23%. Hence, the Pearson correlation is larger than the Pearson correlation values for Figure 9G,H.
In Figure 9J, there is one country (Luxembourg) with very large amounts of foreign direct investment net inflow and outflow. Due to this outlier, the Pearson correlation is close to 1. Hypercontractivity is also close to 1, whereas MIC is small. To analyze the effect of the outlier on the correlation measures, in the following we compute the correlation measures for the samples with outliers removed.

4.2.1. How Hypercontractivity Changes as We Remove Outliers

Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, and Figure 15 show, on the left, the samples from Figure 9E–J, respectively. The middle and right panels show all samples but one outlier and all samples but two outliers, respectively. By comparing the hypercontractivity coefficients for the three datasets for each pair of indicators, we can analyze the effect of outliers on hypercontractivity. For comparison, at the top of each figure we show the estimated hypercontractivity (HC), MIC, Pearson correlation (Cor), distance correlation (dCor), maximal correlation (mCor), and the hypercontractivity in the reverse direction (HCR). In Figure 10 and Figure 11, we can see that hypercontractivity is more sensitive to an outlier than the other correlation measures.
In Figure 12 (left), the two countries with the largest number of male deaths from colon and rectum cancer are China and the United States. When China is removed from the dataset (middle), hypercontractivity remains unchanged. When we also remove the United States (right), hypercontractivity becomes small, 0.17. Although this value is still larger than the typical coefficient for two independent indicators (≈0.05), we can see that hypercontractivity is more sensitive to outliers than the other correlation measures.
In Figure 13, the two countries with the largest number of broadband subscribers are the United States and China. When we remove the United States from the samples, hypercontractivity becomes close to zero, which also shows that hypercontractivity is sensitive to outliers.
In Figure 14, hypercontractivity remains large even after we remove outliers. The two countries with the largest hydroelectricity consumption are Norway and Iceland. Even after we remove Norway from the samples (middle), hypercontractivity remains large. When we further remove Iceland (right), hypercontractivity becomes 0.49.
In Figure 15 (middle), all samples but Luxembourg are shown. We can see that most countries have a very small absolute amount of foreign direct investment net outflows (for 126 out of 157 countries, it lies within $[-2, 2]$), and for those countries, the foreign direct investment net outflow is independent of the net inflow. For the remaining countries, there is a positive association between the outflow and the inflow. Hypercontractivity captures this hidden correlation better than the other correlations: it is 0.47, whereas MIC and Pearson correlation are small. If we further remove the rightmost sample (right), hypercontractivity becomes small.
Whether we should treat a sample of a rare type as a meaningful sample or as an outlier depends on the application. If we use hypercontractivity to discover pairs of indicators for which one variable is potentially correlated with the other, then we would expect to discover that aid to a country is potentially correlated with its income growth; other measures will fail. It is possible that hypercontractivity has a larger false positive rate, and depending on the application, one might prefer to err on the side of having more positive cases to be screened by further experiments, surveys, or human judgment.

4.2.2. Hypercontractivity Detecting an Outlier

In Figure 16A,B, we show examples of pairs of indicators for which there is one outlier and the remaining samples are independent. In Figure 16C,D, we show scatter plots of the same pairs of indicators as in Figure 16A,B, respectively, with the outlier removed. As shown in Figure 16A,B, hypercontractivity is close to 1 when there is an outlier. As shown in Figure 16C,D, hypercontractivity is close to 0 when the outlier is removed. This implies that a single outlier can make hypercontractivity large. We can see similar patterns for the other correlation measures, such as Pearson correlation, distance correlation, and maximal correlation, in Figure 16, and for MIC in Figure 16C,D. Nonetheless, these other correlation measures are not as sensitive to an outlier as the hypercontractivity coefficient.
To further study how the hypercontractivity estimator is affected by outliers, we ran simulations on synthetic data. We generated the three synthetic datasets shown in Figure 17 and computed hypercontractivity coefficients. In Figure 17 (left), an outlier is located far from the rest of the samples, and the estimated hypercontractivity coefficient is 0.99. In Figure 17 (middle), an outlier is located close to the rest of the samples, and the estimated hypercontractivity coefficient is 0.04. In Figure 17 (right), X and Y are potentially correlated, and the hypercontractivity estimate is 0.17. As can be seen from this simulation and the experimental results on the WHO dataset, our hypercontractivity estimator is sensitive to outliers. If one wants to filter out the effect of outliers, one can combine robust estimation methods, such as trimming and winsorizing [37,38,39], with the hypercontractivity estimator. This is an interesting future research direction.
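As a sketch of such a combination (our own illustration: `estimate_hypercontractivity` refers to the estimator sketch in Section 3, and scipy's `winsorize` is one of several possible robustification choices):

```python
import numpy as np
from scipy.stats.mstats import winsorize

def winsorized_hc(x, y, limit=0.01):
    """Clip the most extreme 1% on each side of both variables before
    estimating hypercontractivity, to damp the influence of lone outliers."""
    xw = np.asarray(winsorize(x, limits=(limit, limit)))
    yw = np.asarray(winsorize(y, limits=(limit, limit)))
    return estimate_hypercontractivity(xw, yw)
```

Whether such preprocessing is appropriate depends on the application, as discussed above: winsorizing a genuinely rare but correlated regime would destroy exactly the potential correlation one hopes to detect.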

4.3. Gene Pathway Recovery From Single Cell Data

We replicate the genetic pathway detection experiment from [7] and show that hypercontractivity correctly discovers the genetic pathway from a smaller number of samples. A genetic pathway is a series of genes interacting with each other as a chain. Consider the following setup, in which the expression values of four genes in a single cell are modeled by random processes $X_t$, $Y_t$, $Z_t$, and $W_t$, respectively. These four genes interact with each other following the pathway $X_t \to Y_t \to Z_t \to W_t$: it is biologically known that $X_t$ causes $Y_t$ with a negligible delay, and later, at time $t'$, $Y_{t'}$ causes $Z_{t'}$, and so on. Our goal is to recover this known gene pathway from sampled data points. For a sequence of time points $\{t_i\}_{i=0}^m$, we observe $n_i$ i.i.d. samples $\{X_{t_i}^{(j)}, Y_{t_i}^{(j)}, Z_{t_i}^{(j)}, W_{t_i}^{(j)}\}_{j=1}^{n_i}$ generated from the random process $P(X_{t_i}, Y_{t_i}, Z_{t_i}, W_{t_i})$. We use real data obtained by the single-cell mass flow cytometry technique [7].
Given these samples from the time series, the goal of [7] is to recover the direction of interaction along the known pathway using correlation measures; for this purpose, they proposed a new measure called DREMI. The DREMI correlation measure is evaluated on each pair along the pathway, $\tau(X_{t_i}, Y_{t_i})$, $\tau(Y_{t_i}, Z_{t_i})$, and $\tau(Z_{t_i}, W_{t_i})$, at each time point $t_i$. A genetic pathway is declared correctly recovered if the peaks of correlation follow the expected trend: $\arg\max_{t_i} \tau(X_{t_i}, Y_{t_i}) \le \arg\max_{t_i} \tau(Y_{t_i}, Z_{t_i}) \le \arg\max_{t_i} \tau(Z_{t_i}, W_{t_i})$. In [25], the same experiment was done with τ evaluated by the UMI and CMI estimators. In this paper, we evaluate τ using our proposed estimator of hypercontractivity.
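The recovery criterion can be sketched in a few lines (our own illustration; `tau` stands for any correlation estimator, e.g., the hypercontractivity sketch from Section 3, and `data_by_time` is a hypothetical container for the samples at each time point):

```python
import numpy as np

def pathway_recovered(tau, data_by_time):
    """Check the recovery criterion of Section 4.3: the correlation peaks of
    consecutive pairs along the pathway X -> Y -> Z -> W must appear in
    chronological order. `data_by_time[i]` holds the sample vectors
    (x, y, z, w) observed at time t_i."""
    pairs = [(0, 1), (1, 2), (2, 3)]             # (X,Y), (Y,Z), (Z,W)
    peaks = []
    for a, b in pairs:
        scores = [tau(d[a], d[b]) for d in data_by_time]
        peaks.append(int(np.argmax(scores)))
    return peaks[0] <= peaks[1] <= peaks[2], peaks
```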
Figure 18 shows the scatter plots of the pCD3ζ-pSLP76-pERK-pS6 chain at different time points after TCR activation. The data come from CD4+ naïve T lymphocytes from B6 mice with CD3, CD28, and CD4 cross-linking. Each row represents a pair of genes in the chain, and each column stands for a time point after TCR activation. The estimate of hypercontractivity is shown below the scatter plot for each pair and each time point, and we highlight the time point at which each pair is maximally correlated. We can see that the peaks of the correlation of pCD3ζ-pSLP76, pSLP76-pERK, and pERK-pS6 appear at 0.5 min, 1 min, and 2 min, respectively; hence, the pathway is correctly identified.
In Figure 19, similar plots are shown for T-cells exposed to an antigen. Again, hypercontractivity is able to capture the trend.
We subsample the raw data from [7] to evaluate the ability to find the trend from fewer samples. Precisely, given a resampling rate $\gamma \in (0, 1]$, we randomly select a subset of indices $S_i \subseteq [n_i]$ with $\mathrm{card}(S_i) = \gamma n_i$, compute $\tau(X_{t_i}, Y_{t_i})$, $\tau(Y_{t_i}, Z_{t_i})$, and $\tau(Z_{t_i}, W_{t_i})$ from the subsamples $\{X_{t_i}^{(j)}, Y_{t_i}^{(j)}, Z_{t_i}^{(j)}, W_{t_i}^{(j)}\}_{j \in S_i}$, and determine whether we recover the trend successfully, i.e., whether $\arg\max_{t_i} \tau(X_{t_i}, Y_{t_i}) \le \arg\max_{t_i} \tau(Y_{t_i}, Z_{t_i}) \le \arg\max_{t_i} \tau(Z_{t_i}, W_{t_i})$. We repeat the experiment several times with independent subsamples and compute the probability of successfully recovering the trend. Figure 20 illustrates that when the entire dataset is available, all methods recover the trend correctly. When fewer samples are available, hypercontractivity improves upon the other competing measures in recovering the hidden chronological order of interactions along the pathway. For completeness, we run the experiment on datasets for both regular T-cells (left figure) and T-cells exposed to an antigen (right figure), for which we expect distinct biological trends. The hypercontractivity method captures the trend for both datasets correctly and sample-efficiently.

5. Proofs

In this section, we provide proofs for our main results and technical lemmas.

5.1. Proof of Proposition 1

Let $S_F(X,Y) = F\big(\sqrt{s(X;Y)}, \sqrt{s(Y;X)}\big)$ for F satisfying the conditions in Proposition 1. We show that $S_F(X,Y)$ satisfies all of Rényi's axioms, i.e., Axioms 1–5, 6', and 7'.
1. $S_F(X,Y)$ is defined for any pair of non-constant random variables X, Y because $s(X;Y) \in [0,1]$ and $s(Y;X) \in [0,1]$ are defined for any random variables X, Y by Theorem 1.
2. $S_F(X,Y) \in [0,1]$ because the output of F lies in $[0,1]$ by the condition on F.
3. If X and Y are statistically independent, $s(X;Y) = s(Y;X) = 0$. By the condition on F, it follows that $S_F(X,Y) = 0$. Conversely, if $S_F(X,Y) = 0$, then by the condition on F, $s(X;Y)\, s(Y;X) = 0$, which implies that X and Y are statistically independent.
4. $S_F(f(X), g(Y)) = S_F(X,Y)$ for any bijective Borel-measurable functions f, g, because $s(f(X); g(Y)) = s(X;Y)$ and $s(g(Y); f(X)) = s(Y;X)$ by Theorem 1.
5. For $(X,Y) \sim \mathcal{N}(\mu, \Sigma)$ with Pearson correlation ρ, $s(X;Y) = s(Y;X) = \rho^2$. Hence, $S_F(X,Y) = F(|\rho|, |\rho|) = |\rho|$.
6'. If $Y = f(X)$ for a non-constant function f, it follows that $I(f(X); f(X)) = I(f(X); X)$, because if $f(X)$ is discrete, $I(f(X); f(X)) = I(f(X); X) = H(f(X))$, and otherwise $I(f(X); f(X)) = I(f(X); X) = \infty$. Hence,
$$s(X; f(X)) = \sup_{U - X - f(X)} \frac{I(U; f(X))}{I(U; X)} = \frac{I(f(X); f(X))}{I(f(X); X)} = 1,$$
where the supremum is achieved by $U = f(X)$. Similarly, $s(f(X); X) = \sup_{U - f(X) - X} I(U;X)/I(U; f(X)) = 1$. Hence, $S_F(X, f(X)) = F(1,1) = 1$. Likewise, we can show that $S_F(X,Y) = 1$ if $X = g(Y)$.
7'. $S_F(X,Y) = S_F(Y,X)$ because $F(x,y) = F(y,x)$.

5.2. Proof of Theorem 1

We show that $s(X;Y)$ satisfies Axioms 1–6 from Section 2.
  • For any non-constant random variable X, there exists U such that $I(U;X) > 0$. Hence, $s(X;Y)$ is defined for any pair of non-constant random variables X and Y.
  • Since mutual information is non-negative, $s(X;Y) \ge 0$. By the data processing inequality, $I(U;X) \ge I(U;Y)$ for any $U - X - Y$. Hence, $s(X;Y) \le 1$.
  • If X and Y are independent, then for any U, $I(U;Y) \le I(X;Y) = 0$; hence, $s(X;Y) = 0$. If X and Y are dependent, $I(X;Y) > 0$, which implies that $s(X;Y) \ge I(X;Y)/H(X) > 0$ (take $U = X$).
  • For any bijective functions f, g,
$$I(U; g(Y)) = I(U; g(Y), Y) = I(U; Y) + I(U; g(Y) \mid Y) = I(U; Y).$$
Similarly, $I(U; f(X)) = I(U; X)$. Hence,
$$s(f(X); g(Y)) = \sup_{\substack{U:\, U - f(X) - g(Y),\\ I(U; f(X)) > 0}} \frac{I(U; g(Y))}{I(U; f(X))} = \sup_{\substack{U:\, U - X - Y,\\ I(U; X) > 0}} \frac{I(U; Y)}{I(U; X)} = s(X;Y).$$
  • By Theorem 3.1 in [33], for $(X,Y)$ jointly Gaussian with correlation coefficient ρ,
$$\min_{U:\, U - X - Y} \; I(U;X) - \beta\, I(U;Y) = 0$$
for $\beta \le 1/\rho^2$. Equivalently,
$$\max_{U:\, U - X - Y} \; I(U;Y) - \rho^2\, I(U;X) = 0,$$
which implies that $s(X;Y) \le \rho^2$. To show that $s(X;Y) \ge \rho^2$, let $U_Z = X + Z$ for $Z \sim \mathcal{N}(0, \sigma_1^2)$. Consider
$$s(X;Y) \ge \lim_{\sigma_1^2 \to \infty} \frac{I(U_Z; Y)}{I(U_Z; X)} = \lim_{\sigma_1^2 \to \infty} \frac{\log \frac{(\sigma_X^2 + \sigma_1^2)\, \sigma_Y^2}{(\sigma_X^2 + \sigma_1^2)\, \sigma_Y^2 - \rho^2 \sigma_X^2 \sigma_Y^2}}{\log\!\left(1 + \frac{\sigma_X^2}{\sigma_1^2}\right)} = \lim_{\sigma_1^2 \to \infty} \frac{\rho^2 \sigma_X^2 \sigma_Y^2 \,\big/\, \big((\sigma_X^2 + \sigma_1^2)\, \sigma_Y^2 - \rho^2 \sigma_X^2 \sigma_Y^2\big)}{\sigma_X^2 / \sigma_1^2} = \rho^2.$$
Hence, $s(X;Y) = \rho^2$. An alternative proof is provided in [24].
  • To prove that $s(X;Y)$ satisfies Axiom 6, we first show the following lemma.
Lemma 1.
Consider a pair of random variables $(X,Y) \in \mathcal{X} \times \mathcal{Y}$. The hypercontractivity coefficient $s(X;Y)$ is lower bounded by
$$s(X;Y) \;\ge\; \frac{I(U; Y \mid X \in \mathcal{X}_r)}{H(\alpha)/\alpha + I(U; X \mid X \in \mathcal{X}_r)} \qquad (13)$$
for any $\mathcal{X}_r \subseteq \mathcal{X}$ with $P\{X \in \mathcal{X}_r\} =: \alpha > 0$ and any conditional distribution $p(u \mid x)$ on $\mathcal{X}_r$.
Proof. 
Let
$$U_s = \begin{cases} U \sim p(u \mid x) & \text{if } X \in \mathcal{X}_r, \\ \ast & \text{otherwise}. \end{cases}$$
Let $S = \mathbb{I}\{U_s = \ast\} = \mathbb{I}\{X \notin \mathcal{X}_r\}$. Note that $S - U_s - X - Y$ forms a Markov chain and that S is a deterministic function of X. Hence,
$$I(U_s; X) = I(U_s, S; X) = I(S; X) + I(U_s; X \mid S) = H(\alpha) + \alpha\, I(U; X \mid X \in \mathcal{X}_r). \qquad (14)$$
Next, consider
$$I(U_s; Y) = I(U_s, S; Y) = I(S; Y) + I(U_s; Y \mid S) \ge \alpha\, I(U; Y \mid X \in \mathcal{X}_r). \qquad (15)$$
The proof is completed by combining (14) and (15). ☐
Assume that $Y = f(X)$ for $X \in \mathcal{X}_r$. Taking $U = f(X)$ in (13) in Lemma 1, we obtain the following lower bound:
$$s(X;Y) \;\ge\; \frac{I(f(X); f(X) \mid X \in \mathcal{X}_r)}{H(\alpha)/\alpha + I(f(X); X \mid X \in \mathcal{X}_r)}.$$
For any continuous random variable X and non-constant continuous function f, $I(f(X); f(X) \mid X \in \mathcal{X}_r) = I(f(X); X \mid X \in \mathcal{X}_r) = \infty$, which implies that $s(X;Y) = 1$.

5.3. Proof of Theorem 2

We first prove that $\mathrm{mCor}(X,Y) = \sqrt{\alpha}\, \mathrm{mCor}(X_r, Y)$ in (2). Let $S = \mathbb{I}\{X \in \mathcal{X}_r\}$ be the indicator of whether $X \in \mathcal{X}_r$ or not. Consider
$$\begin{aligned}
\mathrm{mCor}(X;Y) &= \max_{\substack{f,g:\; \mathbb{E}[f(X)] = \mathbb{E}[g(Y)] = 0,\\ \mathbb{E}[f^2(X)] \le 1,\; \mathbb{E}[g^2(Y)] \le 1}} \mathbb{E}[f(X)\, g(Y)] \\
&= \max\; \mathbb{E}_S\big[\mathbb{E}[f(X)\, g(Y) \mid S]\big] \\
&= \max\; \Big( \alpha\, \mathbb{E}[f(X) g(Y) \mid X \in \mathcal{X}_r] + \bar{\alpha}\, \mathbb{E}[f(X) g(Y) \mid X \in \mathcal{X}_d] \Big) \\
&= \max\; \Big( \alpha\, \mathbb{E}[f(X) g(Y) \mid X \in \mathcal{X}_r] + \bar{\alpha}\, \mathbb{E}[f(X) \mid X \in \mathcal{X}_d]\, \mathbb{E}[g(Y) \mid X \in \mathcal{X}_d] \Big) \\
&\overset{(a)}{=} \alpha \max\; \mathbb{E}[f(X)\, g(Y) \mid X \in \mathcal{X}_r] \\
&\overset{(b)}{=} \sqrt{\alpha}\; \mathrm{mCor}(X_r, Y),
\end{aligned}$$
where all maxima are over f, g satisfying $\mathbb{E}[f(X)] = \mathbb{E}[g(Y)] = 0$, $\mathbb{E}[f^2(X)] \le 1$, and $\mathbb{E}[g^2(Y)] \le 1$. Step (a) holds since $\mathbb{E}[g(Y) \mid X \in \mathcal{X}_r] = \mathbb{E}[g(Y) \mid X \in \mathcal{X}_d]$ from the assumption that the conditional marginal distributions are equal, and $\mathbb{E}[g(Y)] = \alpha\, \mathbb{E}[g(Y) \mid X \in \mathcal{X}_r] + \bar{\alpha}\, \mathbb{E}[g(Y) \mid X \in \mathcal{X}_d] = 0$, so both conditional means are zero. To show step (b), let $c = \mathbb{E}[f(X) \mid X \in \mathcal{X}_d]$ and note that
$$\alpha\, \mathbb{E}[f(X) \mid X \in \mathcal{X}_r] = -\bar{\alpha}\, c, \qquad \alpha\, \mathbb{E}[f^2(X) \mid X \in \mathcal{X}_r] = \mathbb{E}[f^2(X)] - \bar{\alpha}\, \mathbb{E}[f^2(X) \mid X \in \mathcal{X}_d] \le 1 - \bar{\alpha}\, c^2, \qquad \mathbb{E}[g(Y) \mid X \in \mathcal{X}_r] = 0.$$
Hence,
$$\begin{aligned}
\max_{\substack{f,g:\; \mathbb{E}[f(X)] = \mathbb{E}[g(Y)] = 0,\\ \mathbb{E}[f^2(X)] \le 1,\; \mathbb{E}[g^2(Y)] \le 1}} \mathbb{E}[f(X)\, g(Y) \mid X \in \mathcal{X}_r]
&= \max_{\substack{f_r, g:\; \mathbb{E}[f_r(X)] = -\bar{\alpha}c/\alpha,\; \mathbb{E}[g(Y)] = 0,\\ \mathbb{E}[f_r^2(X)] \le (1 - \bar{\alpha}c^2)/\alpha,\; \mathbb{E}[g^2(Y)] \le 1}} \mathbb{E}[f_r(X)\, g(Y)] \\
&= \max_{\substack{f_{rc}, g:\; \mathbb{E}[f_{rc}(X)] = 0,\; \mathbb{E}[g(Y)] = 0,\\ \mathbb{E}[f_{rc}^2(X)] \le (\alpha - \bar{\alpha}c^2)/\alpha^2,\; \mathbb{E}[g^2(Y)] \le 1}} \mathbb{E}[f_{rc}(X)\, g(Y)] \\
&= \max_{\substack{f_{rc}, g:\; \mathbb{E}[f_{rc}(X)] = 0,\; \mathbb{E}[g(Y)] = 0,\\ \mathbb{E}[f_{rc}^2(X)] \le 1/\alpha,\; \mathbb{E}[g^2(Y)] \le 1}} \mathbb{E}[f_{rc}(X)\, g(Y)] \\
&= \max_{\substack{f_{rca}, g:\; \mathbb{E}[f_{rca}(X)] = 0,\; \mathbb{E}[g(Y)] = 0,\\ \mathbb{E}[f_{rca}^2(X)] \le 1,\; \mathbb{E}[g^2(Y)] \le 1}} \frac{1}{\sqrt{\alpha}}\, \mathbb{E}[f_{rca}(X)\, g(Y)] = \frac{\mathrm{mCor}(X_r, Y)}{\sqrt{\alpha}},
\end{aligned}$$
where $f_r(X)$, $f_{rc}(X) = f_r(X) + \bar{\alpha}c/\alpha$, and $f_{rca}(X) = \sqrt{\alpha}\, f_{rc}(X)$ are functions defined only for $X \in \mathcal{X}_r$, and the expectations in the constraints are with respect to the conditional distribution given $X \in \mathcal{X}_r$.
We next show that dCor(X,Y) = α dCor(X_r,Y) in (3). Let
h_X(s) = E[e^{isX}], h_Y(t) = E[e^{itY}], h_XY(s,t) = E[e^{i(sX+tY)}].
Note that
h_XY(s,t) = E[e^{i(sX+tY)}] = α E[e^{i(sX+tY)} | X∈X_r] + ᾱ E[e^{isX} | X∈X_d] E[e^{itY} | X∈X_d] = α E[e^{i(sX+tY)} | X∈X_r] + ᾱ E[e^{isX} | X∈X_d] h_Y(t),   (16)
and
h_X(s) = E[e^{isX}] = α E[e^{isX} | X∈X_r] + ᾱ E[e^{isX} | X∈X_d].   (17)
By combining (16) and (17),
h_XY(s,t) − h_X(s) h_Y(t) = α E[e^{i(sX+tY)} | X∈X_r] − α E[e^{isX} | X∈X_r] h_Y(t) = α ( E[e^{i(sX+tY)} | X∈X_r] − E[e^{isX} | X∈X_r] E[e^{itY} | X∈X_r] ),
and dCor(X,Y) = α dCor(X_r,Y) follows.
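The characteristic-function identity above can be checked by Monte Carlo on any mixture satisfying the assumptions. The sketch below is our toy construction (X uniform on [0,1], X_r = [0,α], Y = X/α on X_r and an independent Unif[0,1] otherwise, so the conditional marginals of Y agree); it compares the two sides at a few (s,t):

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 200_000, 0.2

x = rng.uniform(0, 1, n)
in_r = x < alpha                                      # rare region X_r
y = np.where(in_r, x / alpha, rng.uniform(0, 1, n))   # Y ~ Unif[0,1] either way

for s, t in [(1.0, 2.0), (3.0, -1.0), (5.0, 5.0)]:
    lhs = (np.exp(1j * (s * x + t * y)).mean()
           - np.exp(1j * s * x).mean() * np.exp(1j * t * y).mean())
    xr, yr = x[in_r], y[in_r]
    rhs = alpha * (np.exp(1j * (s * xr + t * yr)).mean()
                   - np.exp(1j * s * xr).mean() * np.exp(1j * t * yr).mean())
    print((s, t), np.round(lhs, 4), np.round(rhs, 4))
# The two sides agree up to O(n^{-1/2}) sampling error.
```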
Finally, we show that MIC(X,Y) ≥ α MIC(X_r,Y) in (4). Let X_Q(X) and Y_Q(Y) denote quantizations of X and Y, respectively. Consider
MIC(X,Y) = max_{X_Q(X), Y_Q(Y)} I(X_Q;Y_Q) / log min{|X_Q|, |Y_Q|}
 ≥ max_{X_Q(X), Y_Q(Y)} I(𝟙{X∈X_r}, X_Q; Y_Q) / log min{|X_Q|, |Y_Q|}
 =(a) α max_{X_Q(X), Y_Q(Y)} I(X_Q;Y_Q | X∈X_r) / log min{|X_Q|, |Y_Q|}
 ≥ α max_{X_Q(X_r), Y_Q(Y)} I(X_Q;Y_Q | X∈X_r) / log min{|X_Q(X_r)|, |Y_Q|}
 = α MIC(X_r,Y),
where step (a) holds because the independence of 𝟙{X∈X_r} and Y implies the independence of 𝟙{X∈X_r} and Y_Q, and the independence of X and Y on X ∈ X_d implies the independence of X_Q and Y_Q on X ∈ X_d.
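As a rough illustration of the quantization argument, one can compare normalized mutual information on the full mixture and on the rare region. The helper below (ours) uses equal-width grids only, so it is a crude stand-in for the MIC of [5], which maximizes over all partitions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 100_000, 0.2
x = rng.uniform(0, 1, n)
in_r = x < alpha
y = np.where(in_r, x / alpha, rng.uniform(0, 1, n))

def grid_mi_ratio(x, y, k):
    """I(X_Q;Y_Q) / log min{|X_Q|,|Y_Q|} for a k-by-k equal-width grid."""
    counts, _, _ = np.histogram2d(x, y, bins=k)
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    mi = (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum()
    return mi / np.log(k)

for k in [4, 8, 16, 32]:
    full = grid_mi_ratio(x, y, k)
    rare = grid_mi_ratio(x[in_r], y[in_r], k)
    print(f"k={k:2d}   full grid: {full:.3f}   alpha * rare region: {alpha*rare:.3f}")
# Within X_r the relation is deterministic, so the rare-region ratio is near 1.
# Fixed equal-width grids on the full data need fine resolution before the
# alpha-fraction contributes, which is why MIC maximizes over partitions.
```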

5.4. Proof of Proposition 2

The hypercontractivity coefficient in the reverse direction, s(Y;X), is defined as
s(Y;X) = sup_{U: U−Y−X} I(U;X) / I(U;Y).
Let I_r = 𝟙{X∈X_r}. Since the marginal distribution of Y given {X∈X_r} and the one given {X∉X_r} are identical, Y and I_r are independent, i.e., I(Y;I_r) = 0. For any U such that the Markov chain U − Y − X holds, the Markov chain U − Y − X − I_r also holds; hence I(U;I_r) = 0. For any such U, consider
I(U;X) = I(U;X,I_r) = I(U;X|I_r) + I(U;I_r) = I(U;X|I_r) = (1−α) I(U;X|I_r=0) + α I(U;X|I_r=1) =(a) α I(U;X|I_r=1).
Step (a) holds because X and Y are independent given I_r = 0, so that I(U;X|I_r=0) = 0. Consider
I(U;Y) =(a) I(U;Y,I_r) = I(U;Y|I_r) + I(U;I_r) =(b) I(U;Y|I_r) = α I(U;Y|I_r=1) + (1−α) I(U;Y|I_r=0) =(c) I(U;Y|I_r=1),
where step (a) follows since U − Y − I_r; step (b) follows from I(U;I_r) = 0; and step (c) holds since the pair (U,Y) is independent of I_r (as Y is independent of I_r and U − Y − I_r), so that I(U;Y|I_r=0) = I(U;Y|I_r=1). Therefore,
s(Y;X) = sup_{U: U−Y−X} I(U;X)/I(U;Y) = sup_{U: U−Y−X} α I(U;X|I_r=1) / I(U;Y|I_r=1) = α s(Y;X_r).

5.5. Noisy Rare Correlation in Example 1

Let
U = X + Z, Z ∼ N(0, σ_1²).
Consider
sup_{U: U−X−Y, I(U;X)>0} I(U;Y)/I(U;X) ≥ sup_{σ_1² > 0} log( (1+σ_1²) / (1+σ_1² − ρ²) ) / ( H(α)/α + log(1 + 1/σ_1²) ).
The inequality (7) follows by choosing σ_1² = (1−ρ²)/ρ².
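A quick numerical sketch (ours) of this bound compares the closed-form choice σ_1² = (1−ρ²)/ρ² against a grid search over σ_1²:

```python
import numpy as np

def binary_entropy(a):  # in nats
    return -a * np.log(a) - (1 - a) * np.log(1 - a)

def lower_bound(rho2, alpha, s1):
    """The expression inside the supremum above, at sigma_1^2 = s1."""
    num = np.log((1 + s1) / (1 + s1 - rho2))
    den = binary_entropy(alpha) / alpha + np.log(1 + 1 / s1)
    return num / den

alpha, rho2 = 0.1, 0.81
grid = np.logspace(-3, 3, 4001)
print("grid search max   :", max(lower_bound(rho2, alpha, s) for s in grid))
print("closed-form choice:", lower_bound(rho2, alpha, (1 - rho2) / rho2))
# The simple choice sigma_1^2 = (1 - rho^2)/rho^2 captures most of the value
# of the supremum while keeping the bound in closed form.
```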

5.6. Noisy Discrete Rare Correlation in Example 3

The inequality (8) follows by choosing r(x) = 𝟙{x = 1} in (10). To show (9), we show that
mCor(X_r,Y) = 1 − kε/(k−1).   (18)
The rest follows because mCor(X;Y) = √α mCor(X_r,Y) by Theorem 2. To show (18), we use the fact that the maximal correlation equals the second largest singular value of Q = P_X^{−1/2} P_XY P_Y^{−1/2} (see [40] for a detailed proof). We can easily show that
Q = (1 − kε/(k−1)) I + (ε/(k−1)) 11ᵀ.
The first singular vector of Q is P_X^{1/2} = (1/√k) 1. The second singular vector u_2 is orthogonal to (1/√k) 1. Equation (18) follows because mCor(X_r;Y) = u_2ᵀ Q u_2 = (1 − kε/(k−1)) u_2ᵀ u_2 = 1 − kε/(k−1).
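This spectral computation is immediate to verify; the sketch below (ours, with the illustrative choices k = 5 and ε = 0.1) builds Q for the k-ary symmetric channel with uniform input and checks its two largest singular values:

```python
import numpy as np

k, eps = 5, 0.1
# Q = P_X^{-1/2} P_XY P_Y^{-1/2} for uniform X and the symmetric channel
# that keeps the symbol w.p. 1 - eps and errs uniformly otherwise.
Q = (1 - k * eps / (k - 1)) * np.eye(k) + (eps / (k - 1)) * np.ones((k, k))
sv = np.linalg.svd(Q, compute_uv=False)
print("two largest singular values:", sv[0], sv[1])
print("1 - k*eps/(k-1)            :", 1 - k * eps / (k - 1))
# sv[0] = 1 with singular vector (1/sqrt(k)) * ones; sv[1] = mCor(X_r, Y).
```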

5.7. Proof of Proposition 4

We first prove the second part of the proposition: the hypercontractivity coefficient s(X;Y) satisfies Axioms 1–4, 5’, and 6. That s(X;Y) satisfies Axioms 1–4 and 6 follows immediately from Theorem 1, because the same arguments hold for random vectors X and Y. We can show that s(X;Y) satisfies Axiom 5’ using results from [33], where it is shown that, as we increase β starting from zero, min_U { I(U;X) − β I(U;Y) } departs from zero at β = 1/‖Σ_X^{−1/2} Σ_XY Σ_Y^{−1/2}‖² for jointly Gaussian random vectors X and Y. This result implies that s(X;Y) = ‖Σ_X^{−1/2} Σ_XY Σ_Y^{−1/2}‖².
To show that the maximal correlation of two random vectors satisfies Axioms 1–4, 6’, and 7’, we can follow the same arguments used to show that the maximal correlation of two random variables satisfies Axioms 1–4, 6’, and 7’ [4]. To show that the maximal correlation satisfies Axiom 5’, note that the squared maximal correlation is upper bounded by the hypercontractivity coefficient, as shown in Remark 1 in Section 2.3: hence mCor(X;Y) ≤ ‖Σ_X^{−1/2} Σ_XY Σ_Y^{−1/2}‖ for jointly Gaussian X, Y. Equality holds because mCor(X,Y) is lower bounded by the canonical correlation, which is ‖Σ_X^{−1/2} Σ_XY Σ_Y^{−1/2}‖ for jointly Gaussian random vectors (X,Y) [33].
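For concreteness, ‖Σ_X^{−1/2} Σ_XY Σ_Y^{−1/2}‖ is just the largest canonical correlation and can be computed directly. A small sketch (ours, with an arbitrary random joint covariance):

```python
import numpy as np

rng = np.random.default_rng(2)
dx, dy = 3, 2

# Draw a random valid joint covariance for the Gaussian vector (X, Y).
M = rng.normal(size=(dx + dy, dx + dy))
S = M @ M.T + 0.1 * np.eye(dx + dy)
Sx, Sy, Sxy = S[:dx, :dx], S[dx:, dx:], S[:dx, dx:]

def inv_sqrt(A):
    """Inverse symmetric square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(w ** -0.5) @ V.T

C = inv_sqrt(Sx) @ Sxy @ inv_sqrt(Sy)
rho_1 = np.linalg.svd(C, compute_uv=False)[0]  # largest canonical correlation
print("mCor(X;Y) =", rho_1, "  s(X;Y) =", rho_1 ** 2)
```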

5.8. Proof of Theorem 3

We begin with the following assumptions:
(a) There exist finite constants 0 < C_1 < C_2 < ∞ such that the ratio of the optimal r_x to the true p_x satisfies r_x(x)/p_x(x) ∈ [C_1, C_2] for every x ∈ X.
(b) There exists a constant C_0 > 0 such that the KL divergence satisfies D(r_x‖p_x) > C_0.
With a slight abuse of notation, we define s(r_x) = D(r_y‖p_y)/D(r_x‖p_x) and ŝ(w) = wᵀA log(Aᵀw) / wᵀ log w. Therefore, s(X;Y) = max_{r_x ∈ R} s(r_x) and ŝ_Δ(X;Y) = max_{w ∈ T_Δ} ŝ(w). Here R is the probability simplex over all r_x. We want to bound the error |ŝ_Δ(X;Y) − s(X;Y)|. First, consider the quantity:
s_Δ(X;Y) ≜ max_{r_x ∈ T_Δ(R)} s(r_x),
where the constraint set T_Δ(R) is defined as:
T_Δ(R) = { r_x ∈ ℝ^{|X|} : [r_x(x)/p_x(x)] ∈ T_Δ and Σ_{x∈X} r_x(x) ∈ [1 − |X|Δ, 1 + |X|Δ] }.
Now we rewrite the error term as
|ŝ_Δ(X;Y) − s(X;Y)| ≤ |s_Δ(X;Y) − s(X;Y)| + |ŝ_Δ(X;Y) − s_Δ(X;Y)|.
The first error comes from quantization. Let r* be the maximizer of s(X;Y). By assumption, r*(x)/p_x(x) ∈ [C_1, C_2] for all x. Since T_Δ(R) is a quantization of the simplex R, there exists an r_0 ∈ T_Δ(R) such that |r_0(x) − r*(x)| < Δ for all x ∈ X. We bound the difference between s(r_0) and s(r*) by the following lemma:
Lemma 2.
If r(x)/p_x(x) ∈ [C_1, C_2] and r′(x)/p_x(x) ∈ [C_1, C_2] for all x ∈ X, and D(r‖p_x) > C_0 and D(r′‖p_x) > C_0, then
|s(r) − s(r′)| ≤ L max_{x∈X} |r(x) − r′(x)|,
for some positive constant L.
Next we have:
s(X;Y) = s(r*) ≤ s(r_0) + L max_{x∈X} |r_0(x) − r*(x)| ≤ max_{r ∈ T_Δ(R)} s(r) + LΔ = s_Δ(X;Y) + LΔ.
Similarly, letting r̄ be the maximizer of s_Δ(X;Y), we can find an r_1 ∈ R such that |r_1(x) − r̄(x)| < Δ for all x ∈ X. Using Lemma 2 again, we obtain s_Δ(X;Y) ≤ s(X;Y) + LΔ. Therefore, the quantization error is bounded by O(Δ) with probability 1.
Now consider the second term, whose upper bound relies on the convergence of the estimate of s. We claim that for a given r_x the estimator converges in probability:
Lemma 3.
lim_{N→∞} P( |ŝ(w_r) − s(r_x)| > ε ) = 0.
Here w_r(x) = r_x(x)/p_x(x). Since the set T_Δ(R) is finite, by the union bound we have:
lim_{N→∞} P( |ŝ(w_r) − s(r_x)| ≤ ε for all r_x ∈ T_Δ(R) ) ≥ 1 − |T_Δ(R)| lim_{N→∞} P( |ŝ(w_r) − s(r_x)| > ε ) = 1.   (25)
Also, by the strong law of large numbers, we have
lim_{N→∞} P( |p_x(x) − n_x/n| < Δ/(C_2|X|) for all x ∈ X ) = 1,   (26)
where n_x = card{ i ∈ [n] : x_i = x }. We claim that if the events inside the probabilities in (25) and (26) happen simultaneously, then |ŝ_Δ(X;Y) − s_Δ(X;Y)| < ε + O(Δ), which implies the desired claim.
Let w* = arg max_{w ∈ T_Δ} ŝ(w) and define r_2(x) = w*(x) p_x(x). Then [r_2(x)/p_x(x)] ∈ T_Δ for all x, and
|Σ_{x∈X} r_2(x) − 1| ≤ |Σ_{x∈X} w*(x)( p_x(x) − n_x/n )| + Δ|X|/2 ≤ C_2 |X| max_{x∈X} |p_x(x) − n_x/n| + Δ|X|/2 ≤ (|X|/2 + 1)Δ.   (27)
Therefore r_2 ∈ T_Δ(R), so
ŝ_Δ(X;Y) = ŝ(w*) ≤ s(r_2) + ε ≤ s_Δ(X;Y) + ε.
On the other hand, consider r̄ = arg max_{r_x ∈ T_Δ(R)} s(r_x) again, and define w_0(x) = r̄(x)/p_x(x). We know that w_0 ∈ T_Δ^{|X|}, but not necessarily that Σ_{i=1}^n w_0(x_i) = n. However, the sum is close to n:
|Σ_{i=1}^n w_0(x_i) − n| = |Σ_{x∈X} n_x r̄(x)/p_x(x) − n| ≤ n max_{x∈X} ( r̄(x)/p_x(x) ) Σ_{x∈X} |n_x/n − p_x(x)| ≤ n C_2 |X| · Δ/(C_2|X|) = nΔ,
so we can find a w_1 ∈ T_Δ(R) such that |w_1(x) − w_0(x)| ≤ Δ for all x. Let r_4(x) = w_1(x) p_x(x); then, similarly to (27), we know that r_4 ∈ T_Δ(R). Moreover, |r_4(x) − r̄(x)| ≤ p_x(x) |w_1(x) − w_0(x)| ≤ Δ for all x. Then we have
s_Δ(X;Y) = s(r̄) ≤ s(r_4) + L max_{x∈X} |r̄(x) − r_4(x)| ≤ ŝ(w_1) + ε + LΔ ≤ ŝ_Δ(X;Y) + ε + LΔ.
We conclude that |ŝ_Δ(X;Y) − s_Δ(X;Y)| < ε + O(Δ); thus our proof is complete.
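To make the maximization behind ŝ_Δ(X;Y) concrete, here is a minimal finite-alphabet sketch. A randomized search over bounded tiltings r_x stands in for the quantized grid T_Δ(R), the joint pmf is assumed given (in practice one plugs in empirical counts), and the names s_hat_lb and kl are ours; by construction the search only yields lower bounds on the supremum:

```python
import numpy as np

rng = np.random.default_rng(3)

def kl(p, q):
    m = p > 0
    return float((p[m] * np.log(p[m] / q[m])).sum())

def s_hat_lb(pxy, n_trials=20_000, c1=0.1, c2=10.0):
    """Randomized lower bound on sup_{r_x} D(r_y||p_y) / D(r_x||p_x);
    candidate ratios r_x/p_x are drawn from [c1, c2] and renormalized
    (cf. assumption (a))."""
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    py_x = pxy / px[:, None]                    # rows are p(y | x)
    best = 0.0
    for _ in range(n_trials):
        w = rng.uniform(c1, c2, size=px.shape)  # candidate ratios r_x/p_x
        rx = w * px
        rx /= rx.sum()
        dx = kl(rx, px)
        if dx < 1e-8:
            continue
        ry = rx @ py_x                          # r_x pushed through the channel
        best = max(best, kl(ry, py) / dx)
    return best

# Sanity check on the binary symmetric channel with uniform input, for which
# the hypercontractivity coefficient is the classical (1 - 2*eps)^2:
eps = 0.1
pxy = np.array([[(1 - eps) / 2, eps / 2],
                [eps / 2, (1 - eps) / 2]])
print(s_hat_lb(pxy), "vs", (1 - 2 * eps) ** 2)
```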

5.9. Proof of Lemma 2

We will show that |∂s(r)/∂r_x(x)| ≤ L/|X| for every x ∈ X and some constant L. Therefore,
|s(r) − s(r′)| ≤ Σ_{x∈X} |∂s(r)/∂r_x(x)| · |r_x(x) − r′_x(x)| ≤ L max_{x∈X} |r_x(x) − r′_x(x)|.
The gradient can be written as
∂s(r)/∂r_x(x) = ∂/∂r_x(x) [ D(r_y‖p_y)/D(r_x‖p_x) ] = ( 1/D²(r_x‖p_x) ) [ D(r_x‖p_x) · ∂D(r_y‖p_y)/∂r_x(x) − D(r_y‖p_y) · ∂D(r_x‖p_x)/∂r_x(x) ].
Since
∂D(r_x‖p_x)/∂r_x(x) = log( r_x(x)/p_x(x) ) + 1 ≤ max{ |log C_1|, |log C_2| } + 1,
∂D(r_y‖p_y)/∂r_x(x) = ∫ ( ∂r_y(y)/∂r_x(x) ) ( ∂D(r_y‖p_y)/∂r_y(y) ) dy = ∫ p_{y|x}(y|x) ( log( r_y(y)/p_y(y) ) + 1 ) dy ≤ max{ |log C_1|, |log C_2| } + 1,
we have
|∂s(r)/∂r_x(x)| ≤ ( max{ |log C_1|, |log C_2| } + 1 ) ( D(r_x‖p_x) + D(r_y‖p_y) ) / D²(r_x‖p_x) ≤ 2 ( max{ |log C_1|, |log C_2| } + 1 ) / D(r_x‖p_x) ≤ 2 ( max{ |log C_1|, |log C_2| } + 1 ) / C_0,
where the second inequality uses D(r_y‖p_y) ≤ D(r_x‖p_x), which holds by the data processing inequality because r_y and p_y are obtained by passing r_x and p_x through the same channel p_{y|x}. Since C_0, C_1, C_2 are constants and |X| is finite, our proof is complete by letting L = 2|X| ( max{ |log C_1|, |log C_2| } + 1 ) / C_0.

5.10. Proof of Lemma 3

Note that ŝ(w_r) = w_rᵀ A log(Aᵀw_r) / w_rᵀ log w_r. Define D̂(r_y‖p_y) = wᵀA log(Aᵀw) and D̂(r_x‖p_x) = wᵀ log w. We will prove that D̂(r_y‖p_y) converges to D(r_y‖p_y) and that D̂(r_x‖p_x) converges to D(r_x‖p_x), both in probability. Since D(r_x‖p_x) > 0 and D̂(r_x‖p_x) > 0 with probability 1, it then follows that ŝ(w_r) converges to D(r_y‖p_y)/D(r_x‖p_x) = s(r_x) in probability.
The convergence of D̂(r_x‖p_x) follows from the law of large numbers. Since D̂(r_x‖p_x) = (1/n) Σ_{i=1}^n ( r_x(X_i)/p_x(X_i) ) log( r_x(X_i)/p_x(X_i) ) and D(r_x‖p_x) = E_{X∼p_x}[ ( r_x(X)/p_x(X) ) log( r_x(X)/p_x(X) ) ], the weak law of large numbers gives convergence in probability.
For the convergence of D̂(r_y‖p_y), consider the vector v = Aᵀw, whose entries are
v_j = (1/n) Σ_{i=1}^n ( p_xy(X_i,Y_j) / ( p_x(X_i) p_y(Y_j) ) ) w_i = (1/n) Σ_{i=1}^n ( p_{y|x}(Y_j|X_i)/p_y(Y_j) ) ( r_x(X_i)/p_x(X_i) ).
On the other hand, for fixed Y_j = y, we have
r_y(y)/p_y(y) = E_{X∼p_x}[ p_{y|x}(y|X) r_x(X)/p_x(X) ] / p_y(y) = E_{X∼p_x}[ ( p_{y|x}(y|X)/p_y(y) ) ( r_x(X)/p_x(X) ) ].
Therefore, by the law of large numbers, v_j converges to r_y(Y_j)/p_y(Y_j) in probability. Hence, D̂(r_y‖p_y) = (1/n) Σ_{j=1}^n v_j log v_j converges in probability to (1/n) Σ_{j=1}^n ( r_y(Y_j)/p_y(Y_j) ) log( r_y(Y_j)/p_y(Y_j) ), which in turn converges to D(r_y‖p_y) = E_{Y∼p_y}[ ( r_y(Y)/p_y(Y) ) log( r_y(Y)/p_y(Y) ) ] in probability, again by the law of large numbers. Therefore, D̂(r_y‖p_y) converges to D(r_y‖p_y) in probability.
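The two convergence claims are easy to visualize on a finite alphabet. In the sketch below (our toy channel and tilting), the importance-weighted plug-ins D̂(r_x‖p_x) and D̂(r_y‖p_y) approach the population divergences as n grows:

```python
import numpy as np

rng = np.random.default_rng(4)

px = np.array([0.5, 0.3, 0.2])          # true p_x
py_x = np.array([[0.8, 0.1, 0.1],
                 [0.1, 0.8, 0.1],
                 [0.1, 0.1, 0.8]])      # channel p(y|x)
rx = np.array([0.2, 0.3, 0.5])          # a fixed tilting r_x
ry, py = rx @ py_x, px @ py_x

def kl(p, q):
    return float((p * np.log(p / q)).sum())

for n in [100, 10_000, 1_000_000]:
    xs = rng.choice(3, size=n, p=px)
    u = rng.random(n)                                  # sample y | x, vectorized
    ys = (u[:, None] > py_x[xs].cumsum(axis=1)).sum(axis=1)
    w = rx[xs] / px[xs]                                # w(x_i) = r_x(x_i)/p_x(x_i)
    d_hat_x = np.mean(w * np.log(w))                   # -> D(r_x||p_x)
    est_ry = (w[:, None] * py_x[xs]).mean(axis=0)      # v = A^T w, in pmf form
    v = est_ry[ys] / py[ys]
    d_hat_y = np.mean(v * np.log(v))                   # -> D(r_y||p_y)
    print(n, round(d_hat_x, 4), round(d_hat_y, 4))
print("targets:", round(kl(rx, px), 4), round(kl(ry, py), 4))
```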

Acknowledgments

This work was partially supported by NSF grants CNS-1527754, CNS-1718270, CCF-1553452, CCF-1617745, CCF-1651236, and CCF-1705007, and by a Google Faculty Research Award.

Author Contributions

All authors contributed equally to this work. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pearson, K. Note on Regression and Inheritance in the Case of Two Parents. Proc. R. Soc. Lond. 1895, 58, 240–242. [Google Scholar] [CrossRef]
  2. Hirschfeld, H. A Connection between Correlation and Contingency. Math. Proc. Camb. Philos. Soc. 1935, 31, 520–524. [Google Scholar] [CrossRef]
  3. Gebelein, H. Das statistische Problem der Korrelation als Variations-und Eigenwertproblem und sein Zusammenhang mit der Ausgleichsrechnung. J. Appl. Math. Mech. 1941, 21, 364–379. [Google Scholar] [CrossRef]
  4. Rényi, A. On measures of dependence. Acta Math. Hung. 1959, 10, 441–451. [Google Scholar] [CrossRef]
  5. Reshef, D.N.; Reshef, Y.A.; Finucane, H.K.; Grossman, S.R.; McVean, G.; Turnbaugh, P.J.; Lander, E.S.; Mitzenmacher, M.; Sabeti, P.C. Detecting Novel Associations in Large Data Sets. Science 2011, 334, 1518–1524. [Google Scholar] [CrossRef] [PubMed]
  6. Székely, G.J.; Rizzo, M.L.; Bakirov, N.K. Measuring and testing dependence by correlation of distances. Ann. Stat. 2007, 35, 2769–2794. [Google Scholar] [CrossRef]
  7. Krishnaswamy, S.; Spitzer, M.H.; Mingueneau, M.; Bendall, S.C.; Litvin, O.; Stone, E.; Pe’er, D.; Nolan, G.P. Conditional density-based analysis of T cell signaling in single-cell data. Science 2014, 346, 1250689. [Google Scholar] [CrossRef] [PubMed]
  8. Tishby, N.; Pereira, F.C.; Bialek, W. The Information Bottleneck Method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control, and Computing, Champaign, IL, USA, 22–24 September 1999; pp. 368–377. [Google Scholar]
  9. Dhillon, I.S.; Mallela, S.; Kumar, R. A Divisive Information-Theoretic Feature Clustering Algorithm for Text Classification. J. Mach. Learn. Res. 2003, 3, 1265–1287. [Google Scholar]
  10. Bekkerman, R.; El-Yaniv, R.; Tishby, N.; Winter, Y. Distributional Word Clusters vs. Words for Text Categorization. J. Mach. Learn. Res. 2003, 3, 1183–1208. [Google Scholar]
  11. Anantharam, V.; Gohari, A.A.; Kamath, S.; Nair, C. On Maximal Correlation, Hypercontractivity, and the Data Processing Inequality studied by Erkip and Cover. arXiv, 2013; arXiv:1304.6133. [Google Scholar]
  12. Davies, E.B.; Gross, L.; Simon, B. Hypercontractivity: A Bibliographic Review. In Ideas and Methods in Quantum and Statistical Physics (Oslo, 1988); Cambridge University Press: Cambridge, UK, 1992; pp. 370–389. [Google Scholar]
  13. Nelson, E. Construction of quantum fields from Markoff fields. J. Funct. Anal. 1973, 12, 97–112. [Google Scholar] [CrossRef]
  14. Kahn, J.; Kalai, G.; Linial, N. The Influence of Variables on Boolean Functions. In Proceedings of the IEEE Computer Society SFCS ’88 29th Annual Symposium on Foundations of Computer Science, White Plains, NY, USA, 24–26 October 1988; pp. 68–80. [Google Scholar]
  15. O’Donnell, R. Analysis of Boolean Functions; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  16. Bonami, A. Étude des coefficients de Fourier des fonctions de Lp(G). Annales de l’institut Fourier 1970, 20, 335–402. (In French) [Google Scholar] [CrossRef]
  17. Beckner, W. Inequalities in Fourier analysis. Ann. Math. 1975, 102, 159–182. [Google Scholar] [CrossRef]
  18. Gross, L. Hypercontractivity and logarithmic Sobolev inequalities for the Clifford-Dirichlet form. Duke Math. J. 1975, 42, 383–396. [Google Scholar] [CrossRef]
  19. Ahlswede, R.; Gács, P. Spreading of sets in product spaces and hypercontraction of the Markov operator. Ann. Probab. 1976, 4, 925–939. [Google Scholar] [CrossRef]
  20. Mossel, E.; Oleszkiewicz, K.; Sen, A. On reverse hypercontractivity. Geom. Funct. Anal. 2013, 23, 1062–1097. [Google Scholar] [CrossRef]
  21. Nair, C.; (Chinese University of Hong Kong, Hong Kong, China); Kamath, S.; (PDT Partners, Princeton, NJ, USA). Personal communication, 2016.
  22. Alemi, A.A.; Fischer, I.; Dillon, J.V.; Murphy, K. Deep variational information bottleneck. arXiv, 2017; arXiv:1612.00410. [Google Scholar]
  23. Achille, A.; Soatto, S. Information Dropout: Learning Optimal Representations Through Noisy Computation. arXiv, 2016; arXiv:1611.01353. [Google Scholar]
  24. Nair, C. An extremal inequality related to hypercontractivity of Gaussian random variables. In Proceedings of the Information Theory and Applications Workshop, San Diego, CA, USA, 9–14 February 2014. [Google Scholar]
  25. Gao, W.; Kannan, S.; Oh, S.; Viswanath, P. Conditional Dependence via Shannon Capacity: Axioms, Estimators and Applications. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 2780–2789. [Google Scholar]
  26. Ngiam, J.; Khosla, A.; Kim, M.; Nam, J.; Lee, H.; Ng, A.Y. Multimodal deep learning. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), Washington, DC, USA, 28 June–2 July 2011; pp. 689–696. [Google Scholar]
  27. Srivastava, N.; Salakhutdinov, R.R. Multimodal learning with deep Boltzmann machines. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 2222–2230. [Google Scholar]
  28. Andrew, G.; Arora, R.; Bilmes, J.; Livescu, K. Deep canonical correlation analysis. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 1247–1255. [Google Scholar]
  29. Makur, A.; Zheng, L. Linear Bounds between Contraction Coefficients for f-Divergences. arXiv, 2017; arXiv:1510.01844. [Google Scholar]
  30. Witsenhausen, H.S. On Sequences of Pairs of Dependent Random Variables. SIAM J. Appl. Math. 1975, 28, 100–113. [Google Scholar] [CrossRef]
  31. Bell, C. Mutual information and maximal correlation as measures of dependence. Ann. Math. Stat. 1962, 33, 587–595. [Google Scholar] [CrossRef]
  32. Nair, C. Equivalent Formulations of Hypercontractivity Using Information Measures. In Proceedings of the 2014 International Zurich Seminar on Communications, Zurich, Switzerland, 26–28 February 2014. [Google Scholar]
  33. Chechik, G.; Globerson, A.; Tishby, N.; Weiss, Y. Information Bottleneck for Gaussian Variables. J. Mach. Learn. Res. 2005, 6, 165–188. [Google Scholar]
  34. Michaeli, T.; Wang, W.; Livescu, K. Nonparametric Canonical Correlation Analysis. In Proceedings of the ICML’16 33rd International Conference on International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; Volume 48, pp. 1967–1976. [Google Scholar]
  35. Simon, N.; Tibshirani, R. Comment on "Detecting Novel Associations in Large Data Sets" by Reshef et al., Science 16 December 2011. arXiv, 2014; arXiv:stat.ME/1401.7645. [Google Scholar]
  36. Gorfine, M.; Heller, R.; Heller, Y. Comment on Detecting Novel Associations in Large Data Sets. Available online: http://emotion.technion.ac.il/~gorfinm/filesscience6.pdf (accessed on 30 October 2017).
  37. Hastings, C.; Mosteller, F.; Tukey, J.W.; Winsor, C.P. Low Moments for Small Samples: A Comparative Study of Order Statistics. Ann. Math. Stat. 1947, 18, 413–426. [Google Scholar] [CrossRef]
  38. McBean, E.A.; Rovers, F. Statistical Procedures for Analysis of Environmental Monitoring Data and Assessment; Prentice-Hall: Upper Saddle River, NJ, USA, 1998. [Google Scholar]
  39. Rustum, R.; Adeloye, A.J. Replacing outliers and missing values from activated sludge data using Kohonen self-organizing map. J. Environ. Eng. 2007, 133, 909–916. [Google Scholar] [CrossRef]
  40. Kumar, G. Binary Rényi Correlation: A Simpler Proof of Witsenhausen’s Result and a Tight Lower Bound. Available online: http://www.gowthamiitm.com/research/Witsenhausen_simpleproof.pdf (accessed on 30 October 2017).
Figure 1. A measure of potential correlation should capture the rare correlation in X ∈ [0,1] in these examples, which satisfy Axiom 6 for a linear and a quadratic function, respectively.
Figure 2. Lower bound on s(X;Y) and mCor(X;Y) for α = 0.1 in (left) Example 1, (middle) Example 2, and (right) Example 3 with ε = 0.1.
Figure 3. Sample data points for eight functions with/without a potential correlation for n = 320.
Figure 4. Power vs. noise level for α = 0.05, n = 320.
Figure 5. Power vs. noise level for α = 0.1, n = 320.
Figure 6. Power vs. noise level for α = 0.1 and n = 320 (A,D), corresponding examples of an independent dataset (B,E) and a correlated dataset (C,F).
Figure 7. Power vs. α (fraction of correlated samples) for n = 320, σ² = 0.1.
Figure 8. Power vs. n (number of samples) for α = 0.05, σ² = 0.1.
Figure 9. (A,D) Scatter plot of correlation measures; (B) Correlations are small; (C) Correlations are large; (E–H) Only the hypercontractivity coefficient discovers potential correlation; (I) Hypercontractivity discovers potential correlation; (J) Hypercontractivity and Pearson correlation are large because of an outlier.
Figure 10. Samples for the pair of indicators shown in Figure 9E from the entire WHO dataset (left), without one outlier (middle), and without two outliers (right).
Figure 11. Samples for the pair of indicators shown in Figure 9F from the entire WHO dataset (left), without one outlier (middle), and without two outliers (right).
Figure 12. Samples for the pair of indicators shown in Figure 9G from the entire WHO dataset (left), without one outlier (middle), and without two outliers (right).
Figure 13. Samples for the pair of indicators shown in Figure 9H from the entire WHO dataset (left), without one outlier (middle), and without two outliers (right).
Figure 14. Samples for the pair of indicators shown in Figure 9I from the entire WHO dataset (left), without one outlier (middle), and without two outliers (right).
Figure 15. Samples for the pair of indicators shown in Figure 9J from the entire WHO dataset (left), without one outlier (middle), and without two outliers (right).
Figure 16. Hypercontractivity and other correlation measures become smaller as we remove an outlier.
Figure 17. Synthetic data: (left) an outlier is located far from the other samples; (middle) an outlier is located close to the rest of the samples; (right) potential correlation exists.
Figure 18. Scatter plots of gene pathway data for various pairs of data and various time points (regular T-cells).
Figure 19. Scatter plots of gene pathway data for various pairs of data and various time points (T-cells exposed to an antigen).
Figure 20. Accuracy vs. subsampling rate. The hypercontractivity method recovers the trend with higher probability than the other methods when the data size is small. (left) Regular T-cells; (right) T-cells exposed to an antigen [7].
Table 1. Correlation estimates for independent and correlated samples from Figure 3.

#  Function    α     σ²    Cor (Dep/Indep)  dCor (Dep/Indep)  mCor (Dep/Indep)  MIC (Dep/Indep)  HC (Dep/Indep)
1  Linear      0.05  0.03  0.03/0.00        0.19/0.11         0.06/0.04         0.21/0.17        0.18/0.08
2  Quadratic   0.10  0.10  0.00/0.01        0.09/0.10         0.07/0.02         0.21/0.18        0.08/0.04
3  Cubic       0.10  0.00  0.02/0.00        0.16/0.08         0.09/0.03         0.26/0.17        0.11/0.04
4  sin(4πX)    0.05  0.03  0.00/0.00        0.10/0.06         0.03/0.01         0.20/0.18        0.10/0.04
5  sin(16πX)   0.10  0.00  0.00/0.00        0.07/0.08         0.03/0.03         0.18/0.22        0.03/0.03
6  X^(1/4)     0.05  0.01  0.01/0.00        0.12/0.07         0.02/0.01         0.20/0.20        0.12/0.04
7  Circle      0.10  0.00  0.00/0.00        0.09/0.05         0.01/0.03         0.16/0.17        0.06/0.01
8  Step func.  0.10  0.03  0.00/0.00        0.13/0.07         0.04/0.02         0.20/0.17        0.11/0.04
