Article

Hardness and Approximability of Dimension Reduction on the Probability Simplex

Department of Computer Science, University of Salerno, 84084 Fisciano, Italy
Algorithms 2024, 17(7), 296; https://doi.org/10.3390/a17070296
Submission received: 17 May 2024 / Revised: 25 June 2024 / Accepted: 4 July 2024 / Published: 6 July 2024
(This article belongs to the Special Issue Selected Algorithmic Papers from IWOCA 2024)

Abstract

Dimension reduction is a technique used to transform data from a high-dimensional space into a lower-dimensional space, aiming to retain as much of the original information as possible. This approach is crucial in many disciplines like engineering, biology, astronomy, and economics. In this paper, we consider the following dimensionality reduction instance: Given an n-dimensional probability distribution p and an integer m < n , we aim to find the m-dimensional probability distribution q that is the closest to p, using the Kullback–Leibler divergence as the measure of closeness. We prove that the problem is strongly NP-hard, and we present an approximation algorithm for it.

1. Introduction

Dimension reduction [1,2] is a methodology for mapping data from a high-dimensional space to a lower-dimensional space, while approximately preserving the original information content. This process is essential in fields such as engineering, biology, astronomy, and economics, where large datasets with high-dimensional points are common.
It is often the case that the computational complexity of the algorithms employed to extract relevant information from these datasets depends on the dimension of the space where the points lie. Therefore, it is important to find a representation of the data in a lower-dimensional space that still (approximately) preserves the information content of the original data, as per given criteria.
A special case of the general issue illustrated before arises when the elements of the dataset are n-dimensional probability distributions, and the problem is to approximate them by lower-dimensional ones. This question has been extensively studied in different contexts. In [3,4], the authors address the problem of dimensionality reduction on sets of probability distributions with the aim of preserving specific properties, such as pairwise distances. In [5], Gokhale considers the problem of finding the distribution that minimizes, subject to a set of linear constraints on the probabilities, the “discrimination information” with respect to a given probability distribution. Similarly, in [6], Globerson et al. address the dimensionality reduction problem by introducing a nonlinear method aimed at minimizing the loss of mutual information from the original data. In [7], Lewis explores dimensionality reduction for reducing storage requirements and proposes an approximation method based on the maximum entropy criterion. Likewise, in [8], Adler et al. apply dimensionality reduction to storage applications, focusing on the efficient representation of large-alphabet probability distributions. More closely related to the dimensionality reduction that we deal with in this paper are the works [9,10,11,12]. In [10,11], the authors address task scheduling problems where the objective is to allocate tasks of a project in a way that maximizes the likelihood of completing the project by the deadline. They formalize the problem in terms of random variables approximation by using the Kolmogorov distance as a measure of distance and present an optimal algorithm for the problem. In contrast, in [12], Vidyasagar defines a metric distance between probability distributions on two distinct finite sets of possibly different cardinalities based on the Minimum Entropy Coupling (MEC) problem. Informally, in the MEC, given two probability distributions p and q, one seeks to find a joint distribution ϕ that has p and q as marginal distributions and also has minimum entropy. Unfortunately, computing the MEC is NP-hard, as shown in [13]. However, numerous works in the literature present efficient algorithms for computing couplings with entropy within a constant number of bits from the optimal value [14,15,16,17,18]. We note that computing the coupling of a pair of distributions can be seen as essentially the inverse of dimension reduction. Specifically, given two distributions p and q, one constructs a third, larger distribution ϕ , such that p and q are derived from ϕ or, more formally, aggregations of ϕ . In contrast, the dimension reduction problem addressed in this paper involves starting with a distribution p and creating another, smaller distribution that is derived from p or, more formally, is an aggregation of p.
Moreover, in [12], the author demonstrates that, according to the defined metric, any optimal reduced-order approximation must be an aggregation of the original distribution. Consequently, the author provides an approximation algorithm based on the total variation distance, using an approach similar to the one we will employ in Section 4. Similarly, in [9], Cicalese et al. examine dimensionality reduction using the same distance metric introduced in [12]. They propose a general criterion for approximating p with a shorter vector q, based on concepts from Majorization theory, and provide an approximation approach to solve the problem.
We also mention that analogous problems arise in scenario reduction [19], where one seeks to best approximate a given discrete distribution by another distribution with fewer atoms, in compressing probability distributions [20], and elsewhere [21,22,23]. We refer the reader to the survey [24] for further application examples.
In this paper, we study the following instantiation of the general problem described above: given an $n$-dimensional probability distribution $p = (p_1, \ldots, p_n)$ and an integer $m < n$, find the $m$-dimensional probability distribution $q = (q_1, \ldots, q_m)$ that is closest to $p$, where the measure of closeness is the well-known relative entropy [25] (also known as the Kullback–Leibler divergence). In Section 2, we formally state the problem. In Section 3, we prove that the problem is strongly NP-hard, and in Section 4, we provide an approximation algorithm returning a solution whose distance from $p$ is at most 1 plus the minimum possible distance.

2. Statement of the Problem and Mathematical Preliminaries

Let
$$\mathcal{P}_n = \Big\{\, p = (p_1, \ldots, p_n) \;:\; p_1 \ge \cdots \ge p_n > 0,\ \sum_{i=1}^{n} p_i = 1 \,\Big\} \tag{1}$$
be the $(n-1)$-dimensional probability simplex. Given two probability distributions $p \in \mathcal{P}_n$ and $q \in \mathcal{P}_m$, with $m < n$, we say that $q$ is an aggregation of $p$ if each component of $q$ can be expressed as the sum of distinct components of $p$. More formally, $q$ is an aggregation of $p$ if there exists a partition $\Pi = (\Pi_1, \ldots, \Pi_m)$ of $\{1, \ldots, n\}$ such that $q_i = \sum_{j \in \Pi_i} p_j$ for each $i = 1, \ldots, m$. Notice that the aggregation operation corresponds to the following operation on random variables: given a random variable $X$ that takes values in a finite set $\mathcal{X} = \{x_1, \ldots, x_n\}$, such that $\Pr\{X = x_i\} = p_i$ for $i = 1, \ldots, n$, any function $f : \mathcal{X} \to \mathcal{Y}$, with $\mathcal{Y} = \{y_1, \ldots, y_m\}$ and $m < n$, induces a random variable $f(X)$ whose probability distribution $q = (q_1, \ldots, q_m)$ is an aggregation of $p$. Dimension reduction of random variables through the application of deterministic functions is a common technique in the area (e.g., [10,12,26]). Additionally, the problem also arises in the area of “hard clustering” [27], where one seeks a deterministic mapping $f$ from data, generated by an r.v. $X$ taking values in a set $\mathcal{X}$, to “labels” in some set $\mathcal{Y}$, where typically $|\mathcal{Y}| \ll |\mathcal{X}|$.
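For concreteness, the following minimal Python sketch (ours, not from the paper; it uses 0-based indices and an arbitrary example distribution) computes the aggregation of $p$ induced by a given partition of its index set:

```python
# Illustrative sketch: the aggregation q of p induced by a partition Pi
# of the index set {0, ..., n-1} (0-based indices here).

def aggregate(p, partition):
    """Return the aggregation q of p induced by `partition`,
    where q_i is the sum of the components of p indexed by partition[i]."""
    # Sanity check: `partition` must split the indices of p into disjoint blocks.
    indices = [j for block in partition for j in block]
    assert sorted(indices) == list(range(len(p))), "not a partition of the index set"
    return [sum(p[j] for j in block) for block in partition]

# Example: p in P_4, aggregated into q in P_2 by the partition ({0, 3}, {1, 2}).
p = [0.4, 0.3, 0.2, 0.1]
q = aggregate(p, [[0, 3], [1, 2]])
print(q)  # [0.5, 0.5]
```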
For any probability distribution $p \in \mathcal{P}_n$ and an integer $m < n$, let us denote by $\mathcal{A}_m(p)$ the set of all $q \in \mathcal{P}_m$ that are aggregations of $p$. Our goal is to solve the following optimization problem:
Problem 1.
Given $p \in \mathcal{P}_n$ and $m < n$, find $q^* \in \mathcal{A}_m(p)$ such that
$$\min_{q \in \mathcal{A}_m(p)} D(q \,\|\, p) = D(q^* \,\|\, p), \tag{2}$$
where $D(q \,\|\, p)$ is the relative entropy [25], given by
$$D(q \,\|\, p) = \sum_{i=1}^{m} q_i \log \frac{q_i}{p_i},$$
and logarithms are taken to base 2.
An additional motivation to study Problem 1 comes from the fundamental paper [28], in which the principle of minimum relative entropy (called therein the minimum cross-entropy principle) is derived in an axiomatic manner. The principle states that, of the distributions $q$ that satisfy given constraints (in our case, that $q \in \mathcal{A}_m(p)$), one should choose the one with the least relative entropy “distance” from the prior $p$.
Before establishing the computational complexity of Problem 1, we present a simple lower bound on the optimal value.
Lemma 1.
For each $p \in \mathcal{P}_n$ and $q \in \mathcal{P}_m$, $m < n$, it holds that
$$D(q \,\|\, p) \;\ge\; D(lb(p) \,\|\, p) \;=\; -\log \sum_{i=1}^{m} p_i, \tag{3}$$
where
$$lb(p) = \left( \frac{p_1}{\sum_{i=1}^{m} p_i}, \ldots, \frac{p_m}{\sum_{i=1}^{m} p_i} \right) \in \mathcal{P}_m. \tag{4}$$
Proof. 
Given an arbitrary $p \in \mathcal{P}_n$, one can see that
$$D(lb(p) \,\|\, p) = \sum_{i=1}^{m} \frac{p_i}{\sum_{j=1}^{m} p_j} \log \frac{p_i / \sum_{j=1}^{m} p_j}{p_i} = -\log \sum_{i=1}^{m} p_i.$$
Moreover, for any $p \in \mathcal{P}_n$ and $q \in \mathcal{P}_m$, Jensen's inequality applied to the log function gives the following:
$$D(q \,\|\, p) = -\sum_{i=1}^{m} q_i \log \frac{p_i}{q_i} \;\ge\; -\log \sum_{i=1}^{m} q_i \frac{p_i}{q_i} = -\log \sum_{i=1}^{m} p_i.$$
   □
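The following small sketch (ours, not from the paper; the helper names kl and lb and the sample distributions are arbitrary illustrative choices) checks Lemma 1 numerically:

```python
# Numerical check of Lemma 1: D(q||p) >= D(lb(p)||p) = -log2(p_1 + ... + p_m).
from math import log2

def kl(q, p):
    """Relative entropy D(q||p) in bits; p is truncated to the length of q."""
    return sum(qi * log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def lb(p, m):
    """The distribution lb(p) of Lemma 1: the first m components of p, renormalized."""
    s = sum(p[:m])
    return [pi / s for pi in p[:m]]

p = [0.4, 0.3, 0.2, 0.1]                 # p in P_4 (non-increasing, positive)
m = 2
lower = kl(lb(p, m), p)                  # equals -log2(0.4 + 0.3)
for q in ([0.5, 0.5], [0.7, 0.3], [0.9, 0.1]):   # a few distributions in P_2
    assert kl(q, p) >= lower - 1e-12
print(lower, -log2(sum(p[:m])))          # both ~0.5146 bits
```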

3. Hardness

In this section, we prove that the optimization problem (2) stated in Section 2 is strongly NP-hard. We accomplish this via a reduction from the 3-Partition problem, a well-known strongly NP-hard problem [29], described as follows.
3-Partition: Given a multiset $S = \{a_1, \ldots, a_n\}$ of $n = 3m$ positive integers for which $\sum_{i=1}^{n} a_i = mT$, for some $T$, the problem is to decide whether $S$ can be partitioned into $m$ triplets such that the sum of each triplet is exactly $T$. More formally, the problem is to decide whether there exist $S_1, \ldots, S_m \subseteq S$ such that the following conditions hold:
$$\sum_{a \in S_j} a = T \ \ \forall\, j \in \{1, \ldots, m\}, \qquad S_i \cap S_j = \emptyset \ \ \forall\, i \neq j, \qquad \bigcup_{i=1}^{m} S_i = S, \qquad |S_i| = 3 \ \ \forall\, i \in \{1, \ldots, m\}.$$
Theorem 1.
The 3-Partition problem can be reduced in polynomial time to the problem of finding the aggregation $q^* \in \mathcal{P}_m$ of some $p \in \mathcal{P}_n$ for which
$$D(q^* \,\|\, p) = \min_{q \in \mathcal{A}_m(p)} D(q \,\|\, p).$$
Proof. 
The idea behind the following reduction can be summarized as follows: given an instance of 3-Partition, we transform it into a probability distribution $p$ such that the lower bound $lb(p)$ is an aggregation of $p$ if and only if the original instance of 3-Partition admits a solution. Let an arbitrary instance of 3-Partition be given, that is, let $S$ be a multiset $\{a_1, \ldots, a_n\}$ of $n = 3m$ positive integers with $\sum_{i=1}^{n} a_i = mT$. Without loss of generality, we assume that the integers $a_i$ are ordered in a non-increasing fashion. We construct a valid instance of our Problem 1 by setting $p \in \mathcal{P}_{n+m}$ as follows:
$$p = \Big( \underbrace{\tfrac{1}{m+1}, \ldots, \tfrac{1}{m+1}}_{m \text{ times}},\ \tfrac{a_1 + 2T}{(m+1)\,7mT},\ \ldots,\ \tfrac{a_n + 2T}{(m+1)\,7mT} \Big). \tag{5}$$
Note that $p$ is a probability distribution. In fact, since $n = 3m$, we have
$$\sum_{i=1}^{n} \frac{a_i + 2T}{(m+1)\,7mT} = \frac{1}{(m+1)\,7mT} \sum_{i=1}^{n} (a_i + 2T) = \frac{7mT}{(m+1)\,7mT} = \frac{1}{m+1},$$
so that the components of $p$ sum to $\frac{m}{m+1} + \frac{1}{m+1} = 1$.
Moreover, from (4) and (5), the probability distribution $lb(p) \in \mathcal{P}_m$ associated to $p$ is as follows:
$$lb(p) = \left( \frac{p_1}{\sum_{j=1}^{m} p_j}, \ldots, \frac{p_m}{\sum_{j=1}^{m} p_j} \right) = \left( \frac{1}{m}, \ldots, \frac{1}{m} \right). \tag{6}$$
To prove the theorem, we show that the starting instance of 3-Partition is a Yes instance if and only if it holds that
$$\min_{q \in \mathcal{A}_m(p)} D(q \,\|\, p) = \log \frac{m+1}{m}, \tag{7}$$
where $p$ is given in (5).
We begin by assuming the given instance of 3-Partition is a Yes instance, that is, there is a partition of $S$ into triplets $S_1, \ldots, S_m$ such that
$$\sum_{a_i \in S_j} a_i = T, \qquad \forall\, j \in \{1, \ldots, m\}, \tag{8}$$
and we show that $\min_{q \in \mathcal{A}_m(p)} D(q \,\|\, p) = \log \frac{m+1}{m}$. By Lemma 1, (5), and equality (6), we have
$$\min_{q \in \mathcal{A}_m(p)} D(q \,\|\, p) \;\ge\; D(lb(p) \,\|\, p) = \sum_{i=1}^{m} \frac{1}{m} \log \frac{1/m}{1/(m+1)} = \log \frac{m+1}{m}. \tag{9}$$
From (8), we have
$$\sum_{a_i \in S_j} \frac{a_i + 2T}{(m+1)\,7mT} = \frac{T}{(m+1)\,7mT} + \sum_{a_i \in S_j} \frac{2T}{(m+1)\,7mT} = \frac{T}{(m+1)\,7mT} + \frac{6T}{(m+1)\,7mT} = \frac{1}{(m+1)\,m}, \qquad \forall\, j \in \{1, \ldots, m\}. \tag{10}$$
Let us define $q \in \mathcal{P}_m$ as follows:
$$q = \left( \frac{1}{m+1} + \sum_{a_i \in S_1} \frac{a_i + 2T}{(m+1)\,7mT},\ \ldots,\ \frac{1}{m+1} + \sum_{a_i \in S_m} \frac{a_i + 2T}{(m+1)\,7mT} \right), \tag{11}$$
where, by (10),
$$\sum_{a_i \in S_j} \frac{a_i + 2T}{(m+1)\,7mT} = \frac{1}{(m+1)\,m}, \qquad \forall\, j \in \{1, \ldots, m\}. \tag{12}$$
From (12) and from the fact that $S_1, \ldots, S_m$ form a partition of $\{a_1, \ldots, a_n\}$, we obtain $q \in \mathcal{A}_m(p)$, that is, $q$ is a valid aggregation of $p$ (cf. (5)). Moreover,
$$q = \left( \frac{1}{m}, \ldots, \frac{1}{m} \right),$$
and $D(q \,\|\, p) = \log \frac{m+1}{m}$. Therefore, by (9) and the fact that $q \in \mathcal{A}_m(p)$, we obtain
$$\min_{q \in \mathcal{A}_m(p)} D(q \,\|\, p) = \log \frac{m+1}{m},$$
as required.
To prove the opposite implication, we assume that p (as given in (5)) is a Yes instance, that is,
$$\min_{q \in \mathcal{A}_m(p)} D(q \,\|\, p) = \log \frac{m+1}{m}. \tag{13}$$
We show that the original instance of 3-Partition is also a Yes instance, that is, there is a partition of $S$ into triplets $S_1, \ldots, S_m$ such that
$$\sum_{a_i \in S_j} a_i = T, \qquad \forall\, j \in \{1, \ldots, m\}. \tag{14}$$
Let $q^*$ be the element of $\mathcal{A}_m(p)$ that achieves the minimum in (13). Consequently, we have
$$\log \frac{m+1}{m} = D(q^* \,\|\, p) = \sum_{i=1}^{m} q_i^* \log \frac{q_i^*}{p_i} = \sum_{i=1}^{m} q_i^* \log \frac{1}{p_i} - H(q^*) = \log(m+1) - H(q^*) \qquad (\text{from } (5)), \tag{15}$$
where $H(q^*) = -\sum_{i=1}^{m} q_i^* \log q_i^*$ is the Shannon entropy of $q^*$. From (15), we obtain that $H(q^*) = \log m$; hence, $q^* = (1/m, \ldots, 1/m)$ (see [30], Thm. 2.6.4). Recalling that $q^* \in \mathcal{A}_m(p)$, we obtain that the uniform distribution
$$\left( \frac{1}{m}, \ldots, \frac{1}{m} \right) \tag{16}$$
is an aggregation of $p$. We note that no two of the first $m$ components of $p$, as defined in (5), can be aggregated together to obtain (16), because $2/(m+1) > 1/m$ for $m > 2$; hence, each of the $m$ blocks of the partition contains exactly one of the first $m$ components of $p$. Therefore, in order to obtain (16) as an aggregation of $p$, there must exist a partition $S_1, \ldots, S_m$ of $S = \{a_1, \ldots, a_n\}$ for which
$$\frac{1}{m+1} + \sum_{a_i \in S_j} \frac{a_i + 2T}{(m+1)\,7mT} = \frac{1}{m}, \qquad \forall\, j \in \{1, \ldots, m\}. \tag{17}$$
From (17), we obtain
$$\sum_{a_i \in S_j} \frac{a_i + 2T}{(m+1)\,7mT} = \frac{1}{m(m+1)}, \qquad \forall\, j \in \{1, \ldots, m\}. \tag{18}$$
From this, it follows that
$$2T\,|S_j| + \sum_{a_i \in S_j} a_i = 7T, \qquad \forall\, j \in \{1, \ldots, m\}. \tag{19}$$
We note that, for (19) to be true, there cannot exist any $S_j$ for which $|S_j| \neq 3$. Indeed, if there were a subset $S_j$ with $|S_j| \neq 3$, then, since the $m$ subsets partition the $n = 3m$ elements of $S$, there would be at least one subset $S_k$ with $|S_k| > 3$. Thus, for such an $S_k$, we would have
$$2T\,|S_k| + \sum_{a_i \in S_k} a_i \;\ge\; 8T + \sum_{a_i \in S_k} a_i \;>\; 7T,$$
contradicting (19). Therefore, it holds that
$$|S_j| = 3, \qquad \forall\, j \in \{1, \ldots, m\}. \tag{20}$$
Moreover, from (19) and (20), we obtain
$$\sum_{a_i \in S_j} a_i = 7T - 2T\,|S_j| = T, \qquad \forall\, j \in \{1, \ldots, m\}. \tag{21}$$
Thus, from (21), it follows that the subsets $S_1, \ldots, S_m$ give a partition of $S$ into triplets such that
$$\sum_{a_i \in S_j} a_i = T, \qquad \forall\, j \in \{1, \ldots, m\}. \tag{22}$$
Therefore, the starting instance of 3-Partition is a Yes instance.    □
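To illustrate the construction (5), the following sketch (ours, not from the paper; the function name reduction_instance and the sample multiset are arbitrary) builds the Problem 1 instance $p$ from a 3-Partition multiset, using exact rationals so that the components sum to exactly 1:

```python
from fractions import Fraction

def reduction_instance(a):
    """Map a 3-Partition multiset a (with n = 3m elements summing to m*T)
    to the distribution p in P_{n+m} defined in (5)."""
    n = len(a)
    assert n % 3 == 0, "a 3-Partition instance has n = 3m elements"
    m = n // 3
    T = Fraction(sum(a), m)
    a = sorted(a, reverse=True)                      # w.l.o.g. non-increasing
    head = [Fraction(1, m + 1)] * m                  # m components equal to 1/(m+1)
    tail = [(ai + 2 * T) / ((m + 1) * 7 * m * T) for ai in a]
    p = head + tail
    assert sum(p) == 1                               # p is indeed a probability distribution
    return p

# Example: S = {1, 2, 3, 3, 4, 5}, so m = 2 and T = 9; the triplets {1, 3, 5} and
# {2, 3, 4} witness a Yes instance, and the uniform distribution (1/2, 1/2) is then
# an aggregation of the resulting p, as in the proof of Theorem 1.
print(reduction_instance([1, 2, 3, 3, 4, 5]))
```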

4. Approximation

Given $p \in \mathcal{P}_n$ and $m < n$, let $OPT$ denote the optimal value of the optimization problem (2), that is,
$$OPT = \min_{q \in \mathcal{A}_m(p)} D(q \,\|\, p).$$
In this section, we design a greedy algorithm to compute an aggregation $\bar{q} \in \mathcal{A}_m(p)$ of $p$ such that
$$D(\bar{q} \,\|\, p) < OPT + 1. \tag{23}$$
The idea behind our algorithm is to view the problem of computing an aggregation $q \in \mathcal{A}_m(p)$ as a bin packing problem with “overstuffing” (see [31] and the references quoted therein), that is, a bin packing problem in which bins may be overfilled. In the classical bin packing problem, one is given a set of items with associated weights and a set of bins with associated capacities (usually equal for all bins), and the objective is to place all the items into the bins while minimizing a given cost function.
In our case, we have $n$ items (corresponding to the components of $p$) with weights $p_1, \ldots, p_n$, respectively, and $m$ bins, corresponding to the components of $lb(p)$ (as defined in (4)), with capacities $lb(p)_1, \ldots, lb(p)_m$. Our objective is to place all the $n$ components of $p$ into the $m$ bins without exceeding the capacity $lb(p)_j$ of each bin $j$, $j = 1, \ldots, m$, by more than $\big(\sum_{\ell=1}^{m} p_\ell\big)\, lb(p)_j$. For this purpose, the idea behind Algorithm 1 is quite straightforward: it behaves like classical First-Fit bin packing, and to place the $i$th item, it chooses the first bin $j$ in which the item can be inserted without exceeding the bin's capacity by more than $\big(\sum_{\ell=1}^{m} p_\ell\big)\, lb(p)_j$. In the following, we will show that such a bin always exists and that fulfilling this objective is sufficient to ensure the approximation guarantee (23) we are seeking.
Algorithm 1: GreedyApprox
1. Compute $lb(p) = \big( p_1 / \sum_{\ell=1}^{m} p_\ell, \ldots, p_m / \sum_{\ell=1}^{m} p_\ell \big)$;
2. Let $lb_j^i$ denote the content of bin $j$ after the first $i$ components of $p$ have been placed ($lb_j^0 = 0$ for each $j \in \{1, \ldots, m\}$);
3. For $i = 0, \ldots, n-1$:
        let $j$ be the smallest bin index for which $lb_j^i + p_{i+1} < \big(1 + \sum_{\ell=1}^{m} p_\ell\big)\, lb(p)_j$ holds, and place $p_{i+1}$ into the $j$-th bin:
            $lb_j^{i+1} = lb_j^i + p_{i+1}$,
            $lb_k^{i+1} = lb_k^i$, for each $k \neq j$;
4. Output $\bar{q} = (lb_1^n, \ldots, lb_m^n)$.
Step 3 of GreedyApprox operates as in the classical First-Fit bin packing algorithm. Therefore, it can be implemented to run in $O(n \log m)$ time, as discussed in [32]. In fact, each iteration of the loop in step 3 can be implemented in $O(\log m)$ time by using a balanced binary search tree of height $O(\log m)$ that has a leaf for each bin and in which each node keeps track of the largest remaining capacity among the bins in its subtree.
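For concreteness, here is a minimal Python transcription of GreedyApprox (ours, not from the paper; for brevity it scans the bins linearly, giving $O(nm)$ time instead of the $O(n \log m)$ tree-based implementation discussed above, and the names greedy_approx and kl are our own):

```python
from math import log2

def greedy_approx(p, m):
    """First-Fit placement of the components of p (assumed non-increasing, summing to 1)
    into m bins with capacities lb(p)_j, allowing bin j to be filled up to
    (1 + p_1 + ... + p_m) * lb(p)_j.  Returns the aggregation q_bar as a list."""
    s = sum(p[:m])
    lb = [pi / s for pi in p[:m]]          # bin capacities lb(p)_1, ..., lb(p)_m
    bins = [0.0] * m                        # current bin contents lb_j^i
    for item in p:                          # place p_1, ..., p_n in order
        for j in range(m):                  # First-Fit: smallest feasible bin index
            if bins[j] + item < (1 + s) * lb[j]:
                bins[j] += item
                break
        else:                               # cannot happen, by Lemma 2
            raise AssertionError("no feasible bin found")
    return bins

def kl(q, p):
    """Relative entropy D(q||p) in bits, with p truncated to the length of q."""
    return sum(qi * log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

p = [0.3, 0.25, 0.2, 0.15, 0.1]
q_bar = greedy_approx(p, m=2)
print(q_bar, kl(q_bar, p))                  # e.g., [0.75, 0.25] and ~0.99 bits
```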
Lemma 2.
GreedyApprox computes a valid aggregation $\bar{q} \in \mathcal{A}_m(p)$ of $p \in \mathcal{P}_n$. Moreover, it holds that
$$D(\bar{q} \,\|\, lb(p)) < \log \Big( 1 + \sum_{j=1}^{m} p_j \Big). \tag{24}$$
Proof. 
We first prove that each component $p_i$ of $p$ is placed in some bin. This implies that $\bar{q} \in \mathcal{A}_m(p)$.
For each step $i = 0, \ldots, m-1$, there is always a bin in which the algorithm places $p_{i+1}$. In fact, since $\sum_{\ell=1}^{m} p_\ell < 1$, the capacity $lb(p)_j$ of bin $j$ satisfies the relation
$$lb(p)_j = \frac{p_j}{\sum_{\ell=1}^{m} p_\ell} > p_j, \qquad \forall\, j \in \{1, \ldots, m\}.$$
Let us consider an arbitrary step $i$, with $m \le i < n$, in which the algorithm has placed the first $i$ components of $p$ and needs to place $p_{i+1}$ into some bin. We show that, in this case also, there is always a bin $j$ in which the algorithm can place the item $p_{i+1}$ without exceeding the capacity $lb(p)_j$ of bin $j$ by more than $\big(\sum_{\ell=1}^{m} p_\ell\big)\, lb(p)_j$.
First, notice that at each step $i$, $m \le i < n$, there is at least one bin $k$ whose content $lb_k^i$ does not exceed its capacity $lb(p)_k$, that is, for which $lb_k^i < lb(p)_k$ holds. Were this not the case, we would have $lb_j^i \ge lb(p)_j$ for all bins $j$; then, we would also have
$$\sum_{j=1}^{m} lb_j^i \;\ge\; \sum_{j=1}^{m} lb(p)_j = 1, \tag{25}$$
However, this is not possible since we have placed only the first i < n components of p, and therefore, it holds that
$$\sum_{j=1}^{m} lb_j^i = \sum_{j=1}^{i} p_j < \sum_{j=1}^{n} p_j = 1,$$
contradicting (25). Consequently, let $k$ be the smallest index for which the content of the $k$-th bin does not exceed its capacity, i.e., for which $lb_k^i < lb(p)_k$. For such a bin $k$, we obtain
$$\begin{aligned} \Big(1 + \sum_{j=1}^{m} p_j\Big)\, lb(p)_k &= lb(p)_k + \Big(\sum_{j=1}^{m} p_j\Big)\, lb(p)_k = lb(p)_k + \Big(\sum_{j=1}^{m} p_j\Big) \frac{p_k}{\sum_{j=1}^{m} p_j} \\ &= lb(p)_k + p_k \\ &> lb_k^i + p_k && (\text{since } lb(p)_k > lb_k^i) \\ &\ge lb_k^i + p_{i+1} && (\text{since } p_k \ge p_{i+1}). \end{aligned} \tag{26}$$
Thus, from (26), one derives that the algorithm can place $p_{i+1}$ into bin $k$ without exceeding its capacity $lb(p)_k$ by more than $\big(\sum_{j=1}^{m} p_j\big)\, lb(p)_k$.
This reasoning applies to each $i < n$, thus proving that GreedyApprox correctly assigns each component $p_i$ of $p$ to a bin, effectively computing an aggregation of $p$. Moreover, from the instructions of step 3 of GreedyApprox, the output is an aggregation $\bar{q} = (\bar{q}_1, \ldots, \bar{q}_m) \in \mathcal{A}_m(p)$ for which the following crucial relation holds:
$$\bar{q}_i < \Big(1 + \sum_{j=1}^{m} p_j\Big)\, lb(p)_i, \qquad \forall\, i \in \{1, \ldots, m\}. \tag{27}$$
Let us now prove that $D(\bar{q} \,\|\, lb(p)) < \log\big(1 + \sum_{j=1}^{m} p_j\big)$. We have
$$D(\bar{q} \,\|\, lb(p)) = \sum_{i=1}^{m} \bar{q}_i \log \frac{\bar{q}_i}{lb(p)_i} \;<\; \sum_{i=1}^{m} \bar{q}_i \log \frac{\big(1 + \sum_{j=1}^{m} p_j\big)\, lb(p)_i}{lb(p)_i} \quad (\text{from } (27)) \;=\; \log\Big(1 + \sum_{j=1}^{m} p_j\Big). \tag{28}$$
   □
We need the following technical lemma to show the approximation guarantee of GreedyApprox.
Lemma 3.
Let $q \in \mathcal{P}_m$ and $p \in \mathcal{P}_n$ be two arbitrary probability distributions with $m < n$. It holds that
$$D(q \,\|\, p) = D(q \,\|\, lb(p)) + D(lb(p) \,\|\, p), \tag{29}$$
where $lb(p) = (lb(p)_1, \ldots, lb(p)_m) = \big( p_1 / \sum_{i=1}^{m} p_i, \ldots, p_m / \sum_{i=1}^{m} p_i \big)$.
Proof. 
$$\begin{aligned} D(q \,\|\, p) &= \sum_{i=1}^{m} q_i \log \frac{q_i}{p_i} = \sum_{i=1}^{m} q_i \log \left( \frac{q_i}{p_i} \cdot \frac{\sum_{j=1}^{m} p_j}{\sum_{j=1}^{m} p_j} \right) \\ &= \sum_{i=1}^{m} q_i \log \frac{q_i}{p_i / \sum_{j=1}^{m} p_j} + \sum_{i=1}^{m} q_i \log \frac{1}{\sum_{j=1}^{m} p_j} \\ &= \sum_{i=1}^{m} q_i \log \frac{q_i}{lb(p)_i} + \sum_{i=1}^{m} q_i \log \frac{1}{\sum_{j=1}^{m} p_j} && \Big(\text{since } lb(p)_i = p_i \big/ \textstyle\sum_{j=1}^{m} p_j \Big) \\ &= D(q \,\|\, lb(p)) + \log \frac{1}{\sum_{j=1}^{m} p_j} = D(q \,\|\, lb(p)) + D(lb(p) \,\|\, p). \end{aligned}$$
   □
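As a quick numerical sanity check of the identity (29), the following sketch (ours, not from the paper; the chosen distributions are arbitrary) verifies the decomposition on a small example:

```python
from math import log2, isclose

def kl(q, p):
    # D(q||p) in bits, with p truncated to the length of q
    return sum(qi * log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

p = [0.4, 0.3, 0.2, 0.1]                        # p in P_4
q = [0.55, 0.45]                                # an arbitrary q in P_2
m = len(q)
lbp = [pi / sum(p[:m]) for pi in p[:m]]         # lb(p)
# D(q||p) = D(q||lb(p)) + D(lb(p)||p), as in (29)
assert isclose(kl(q, p), kl(q, lbp) + kl(lbp, p))
```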
The following theorem is the main result of this section.
Theorem 2.
For any $p \in \mathcal{P}_n$ and $m < n$, GreedyApprox produces an aggregation $\bar{q} \in \mathcal{A}_m(p)$ of $p$ such that
$$D(\bar{q} \,\|\, p) < OPT + 1, \tag{30}$$
where $OPT = \min_{q \in \mathcal{A}_m(p)} D(q \,\|\, p)$.
Proof. 
From Lemma 3, we have
$$D(\bar{q} \,\|\, p) = D(\bar{q} \,\|\, lb(p)) + D(lb(p) \,\|\, p),$$
and from Lemma 2, we know that the produced aggregation $\bar{q}$ of $p$ satisfies the relation
$$D(\bar{q} \,\|\, lb(p)) < \log\Big(1 + \sum_{j=1}^{m} p_j\Big). \tag{31}$$
Putting it all together, we obtain:
$$\begin{aligned} D(\bar{q} \,\|\, p) &= D(\bar{q} \,\|\, lb(p)) + D(lb(p) \,\|\, p) \\ &< \log\Big(1 + \sum_{j=1}^{m} p_j\Big) + D(lb(p) \,\|\, p) && (\text{from } (31)) \\ &= \log\Big(1 + \sum_{j=1}^{m} p_j\Big) - \log \sum_{j=1}^{m} p_j \\ &< -\log \sum_{j=1}^{m} p_j + 1 && \Big(\text{since } 1 + \sum_{j=1}^{m} p_j < 2\Big) \\ &\le OPT + 1 && (\text{from Lemma 1}). \end{aligned}$$
   □

5. Concluding Remarks

In this paper, we examined the problem of approximating n-dimensional probability distributions with m-dimensional ones using the Kullback–Leibler divergence as the measure of closeness. We demonstrated that this problem is strongly NP-hard and introduced an approximation algorithm for solving the problem with guaranteed performance.
Moreover, we point out that the analysis of GreedyApprox presented in Theorem 2 is tight. Let $p \in \mathcal{P}_3$ be
$$p = \Big( \frac{1}{2} - \epsilon,\ \frac{1}{2} - \epsilon,\ 2\epsilon \Big),$$
where $\epsilon > 0$ is small. The application of GreedyApprox to $p$ produces the aggregation $\bar{q} \in \mathcal{P}_2$ given by
$$\bar{q} = (1 - 2\epsilon,\ 2\epsilon),$$
whereas one can see that the optimal aggregation $q^* \in \mathcal{P}_2$ is equal to
$$q^* = \Big( \frac{1}{2} + \epsilon,\ \frac{1}{2} - \epsilon \Big).$$
Hence, for $\epsilon \to 0$, we have
$$D(\bar{q} \,\|\, p) = (1 - 2\epsilon) \log \frac{1 - 2\epsilon}{\frac{1}{2} - \epsilon} + 2\epsilon \log \frac{2\epsilon}{\frac{1}{2} - \epsilon} \;\longrightarrow\; 1,$$
while
$$OPT = D(q^* \,\|\, p) = \Big( \frac{1}{2} + \epsilon \Big) \log \frac{\frac{1}{2} + \epsilon}{\frac{1}{2} - \epsilon} + \Big( \frac{1}{2} - \epsilon \Big) \log \frac{\frac{1}{2} - \epsilon}{\frac{1}{2} - \epsilon} \;\longrightarrow\; 0.$$
Therefore, to improve the approximation guarantee, one should use a bin packing heuristic different from the First-Fit strategy employed in GreedyApprox. Another interesting open problem is to provide an approximation algorithm with a (small) multiplicative approximation guarantee. However, both problems mentioned above would probably require a different approach, and we leave them to future investigations.
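As a numerical illustration of the tightness example above (a sketch of ours, not from the paper; the helper kl and the chosen values of $\epsilon$ are arbitrary), one can evaluate the two divergences for decreasing $\epsilon$:

```python
from math import log2

def kl(q, p):
    # D(q||p) in bits, with p truncated to the length of q
    return sum(qi * log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

for eps in (1e-2, 1e-4, 1e-6):
    p = [0.5 - eps, 0.5 - eps, 2 * eps]
    q_bar = [1 - 2 * eps, 2 * eps]       # aggregation produced by GreedyApprox
    q_star = [0.5 + eps, 0.5 - eps]      # optimal aggregation
    print(eps, kl(q_bar, p), kl(q_star, p))  # approaches 1 and 0, respectively
```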
Another interesting line of research would be to extend our findings to different divergence measures (e.g., [33] and references quoted therein).

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The author wants to express his gratitude to Ugo Vaccaro for guidance throughout this research, to the anonymous referees, and to the Academic Editor for many useful suggestions that have improved the presentation of the paper.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Burges, C.J. Dimension reduction: A guided tour. Found. Trends Mach. Learn. 2010, 2, 275–365.
  2. Sorzano, C.O.S.; Vargas, J.; Montano, A.P. A survey of dimensionality reduction techniques. arXiv 2014, arXiv:1403.2877.
  3. Abdullah, A.; Kumar, R.; McGregor, A.; Vassilvitskii, S.; Venkatasubramanian, S. Sketching, Embedding, and Dimensionality Reduction for Information Spaces. Artif. Intell. Stat. PMLR 2016, 51, 948–956.
  4. Carter, K.M.; Raich, R.; Finn, W.G.; Hero, A.O., III. Information-geometric dimensionality reduction. IEEE Signal Process. Mag. 2011, 28, 89–99.
  5. Gokhale, D.V. Approximating discrete distributions, with applications. J. Am. Stat. Assoc. 1973, 68, 1009–1012.
  6. Globerson, A.; Tishby, N. Sufficient dimensionality reduction. J. Mach. Learn. Res. 2003, 3, 1307–1331.
  7. Lewis, P.M., II. Approximating probability distributions to reduce storage requirements. Inf. Control. 1959, 2, 214–225.
  8. Adler, A.; Tang, J.; Polyanskiy, Y. Efficient representation of large-alphabet probability distributions. IEEE J. Sel. Areas Inf. Theory 2022, 3, 651–663.
  9. Cicalese, F.; Gargano, L.; Vaccaro, U. Approximating probability distributions with short vectors, via information theoretic distance measures. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 1138–1142.
  10. Cohen, L.; Grinshpoun, T.; Weiss, G. Efficient optimal Kolmogorov approximation of random variables. Artif. Intell. 2024, 329, 104086.
  11. Cohen, L.; Weiss, G. Efficient optimal approximation of discrete random variables for estimation of probabilities of missing deadlines. Proc. AAAI Conf. Artif. Intell. 2019, 33, 7809–7815.
  12. Vidyasagar, M. A metric between probability distributions on finite sets of different cardinalities and applications to order reduction. IEEE Trans. Autom. Control. 2012, 57, 2464–2477.
  13. Kovačević, M.; Stanojević, I.; Šenk, V. On the entropy of couplings. Inf. Comput. 2015, 242, 369–382.
  14. Cicalese, F.; Gargano, L.; Vaccaro, U. Minimum-entropy couplings and their applications. IEEE Trans. Inf. Theory 2019, 65, 3436–3451.
  15. Compton, S. A tighter approximation guarantee for greedy minimum entropy coupling. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Espoo, Finland, 26 June–1 July 2022; pp. 168–173.
  16. Compton, S.; Katz, D.; Qi, B.; Greenewald, K.; Kocaoglu, M. Minimum-entropy coupling approximation guarantees beyond the majorization barrier. Int. Conf. Artif. Intell. Stat. 2023, 206, 10445–10469.
  17. Li, C. Efficient approximate minimum entropy coupling of multiple probability distributions. IEEE Trans. Inf. Theory 2021, 67, 5259–5268.
  18. Sokota, S.; Sam, D.; Witt, C.; Compton, S.; Foerster, J.; Kolter, J. Computing Low-Entropy Couplings for Large-Support Distributions. arXiv 2024, arXiv:2405.19540.
  19. Rujeerapaiboon, N.; Schindler, K.; Kuhn, D.; Wiesemann, W. Scenario reduction revisited: Fundamental limits and guarantees. Math. Program. 2018, 191, 207–242.
  20. Gagie, T. Compressing probability distributions. Inf. Process. Lett. 2006, 97, 133–137.
  21. Cohen, L.; Fried, D.; Weiss, G. An optimal approximation of discrete random variables with respect to the Kolmogorov distance. arXiv 2018, arXiv:1805.07535.
  22. Pavlikov, K.; Uryasev, S. CVaR distance between univariate probability distributions and approximation problems. Ann. Oper. Res. 2018, 262, 67–88.
  23. Pflug, G.C.; Pichler, A. Approximations for probability distributions and stochastic optimization problems. In Stochastic Optimization Methods in Finance and Energy: New Financial Products and Energy Market Strategies; Springer: Berlin/Heidelberg, Germany, 2011; pp. 343–387.
  24. Melucci, M. A brief survey on probability distribution approximation. Comput. Sci. Rev. 2019, 33, 91–97.
  25. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
  26. Lamarche-Perrin, R.; Demazeau, Y.; Vincent, J.M. The best-partitions problem: How to build meaningful aggregations. In Proceedings of the IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), Atlanta, GA, USA, 17–20 November 2013; pp. 399–404.
  27. Kearns, M.; Mansour, Y.; Ng, A.Y. An information-theoretic analysis of hard and soft assignment methods for clustering. In Learning in Graphical Models; Springer: Dordrecht, The Netherlands, 1998; pp. 495–520.
  28. Shore, J.; Johnson, R. Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy. IEEE Trans. Inf. Theory 1980, 26, 26–37.
  29. Garey, M.; Johnson, D. Strong NP-Completeness results: Motivation, examples, and implications. J. ACM 1978, 25, 499–508.
  30. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley-Interscience: Hoboken, NJ, USA, 2006.
  31. Dell’Olmo, P.; Kellerer, H.; Speranza, M.; Tuza, Z. A 13/12 approximation algorithm for bin packing with extendable bins. Inf. Process. Lett. 1998, 65, 229–233.
  32. Coffman, E.G.; Garey, M.R.; Johnson, D.S. Approximation Algorithms for Bin Packing: A Survey. In Approximation Algorithms for NP-Hard Problems; Hochbaum, D., Ed.; PWS Publishing Co.: Worcester, UK, 1996; pp. 46–93.
  33. Sason, I. Divergence Measures: Mathematical Foundations and Applications in Information-Theoretic and Statistical Problems. Entropy 2022, 24, 712.