Article

Side Information Design in Zero-Error Coding for Computing

Nicolas Charpenay, Maël Le Treust and Aline Roumy
1 Univ. Rennes, CNRS, IRMAR UMR 6625, F-35000 Rennes, France
2 Univ. Rennes, CNRS, Inria, IRISA UMR 6074, F-35000 Rennes, France
3 INRIA Rennes, Campus de Beaulieu, F-35000 Rennes cedex, France
* Authors to whom correspondence should be addressed.
Entropy 2024, 26(4), 338; https://doi.org/10.3390/e26040338
Submission received: 20 December 2023 / Revised: 8 April 2024 / Accepted: 9 April 2024 / Published: 16 April 2024
(This article belongs to the Special Issue Extremal and Additive Combinatorial Aspects in Information Theory)

Abstract:
We investigate the zero-error coding for computing problem with encoder side information. An encoder has access to a source X and is furnished with the side information g(Y). It communicates with a decoder that possesses the side information Y and aims to retrieve f(X, Y) with zero probability of error, where f and g are assumed to be deterministic functions. In previous work, we determined a condition that yields an analytic expression for the optimal rate R*(g); in particular, it covers the case where P_{X,Y} is full support. In this article, we review this result and study the side information design problem, which consists of finding the best trade-offs between the quality of the encoder's side information g(Y) and R*(g). We construct two greedy algorithms that give an achievable set of points in the side information design problem, based on partition refining and coarsening. One of them runs in polynomial time.

1. Introduction

1.1. Zero-Error Coding for Computing

The problem of Figure 1 is a zero-error setting that relates to Orlitsky and Roche's coding for computing problem from [1]. This coding problem appears in video compression [2,3], where X^n models a set of images known at the encoder. The decoder does not always want to retrieve each whole image. Instead, the decoder receives, for each image X_t, t ≤ n, a request Y_t to retrieve the information f(X_t, Y_t). This information can, for instance, be a detection (cat, dog, car, bike) or a scene recognition (street, city, mountain, etc.). The encoder does not know the decoder's exact request but has prior information about it (e.g., the type of request), which is modeled by (g(Y_t))_{t≤n}. This problem also relates to the zero-error Slepian–Wolf open problem, which corresponds to the special case where g is constant and f(X, Y) = X.
Similar schemes to the one depicted in Figure 1 have already been studied, but they differ from the one we are studying in two ways. First, they consider that no side information is available to the encoder. Second, and more importantly, they consider different coding constraints: the lossless case is studied by Orlitsky and Roche in [1], the lossy case by Yamamoto in [4], and the zero-error “unrestricted inputs” case by Shayevitz in [5]. The latter results can be used as bounds for our problem depicted in Figure 1, but do not exactly characterize its optimal rate.
Numerous extensions of the problem depicted in Figure 1 have been studied recently. The distributed context, for instance, has an additional encoder that encodes Y before transmitting it to the decoder. Achievability schemes have been proposed for this setting by Krithivasan and Pradhan in [6] using abelian groups; by Basu et al. in [7] using hypergraphs for the case with maximum distortion criterion; and by Malak and Médard in [8] using hyperplane separations for the continuous lossless case.
Another related context is the network setting, where the function of source random variables from source nodes has to be retrieved at the sink node of a given network. For tree networks, the feasible rate region is characterized by Feizi and Médard in [9] for networks of depth one, and by Sefidgaran and Tchamkerten in [10] under a Markov source distribution hypothesis. In [11], Ravi and Dey consider a bidirectional relay with zero-error “unrestricted inputs” and characterize the rate region for a specific class of functions. In [12], Guang et al. study zero-error function computation on acyclic networks with limited capacities, and give an inner bound based on network cut-sets. For both distributed and network settings, the zero-error coding for computing problems with encoder side information remains open.
In a previous work [13], we determined a condition that we called "pairwise shared side information" such that, if satisfied, the optimal rate R*(g) has a single-letter expression. This covers many cases of interest, in particular the case where P_{X,Y} is full support, for any functions f, g. For the sake of completeness, we review this result. Moreover, we propose an alternative and more interpretable expression of this pairwise shared side information condition. More precisely, we show that the instances where the "pairwise shared side information" condition is satisfied correspond to the worst possible optimal rates in an auxiliary zero-error Slepian–Wolf problem.

1.2. Encoder’s Side Information Design

In the zero-error coding for computing problem with encoder side information, it can be observed that a "coarse" encoder side information (e.g., if g is constant) yields a high optimal rate R*(g), whereas a "fine" encoder side information (e.g., g = Id) yields a low optimal rate R*(g). The side information design problem consists of determining the best trade-offs between the optimal rate R*(g) and the quality of the encoder's side information, which is measured by its entropy H(g(Y)). This entropy is the optimal rate of a zero-error code that transmits the quantized version g(Y) of Y. The best trade-offs correspond to the Pareto front of the achievable set, i.e., the points that cannot be obtained by time sharing between other coding strategies. In short, we aim at determining the Pareto front of the convex hull of the achievable pairs (H(g(Y)), R*(g)).
In this article, we propose a greedy algorithm that gives an achievable set of points in the side information design problem when P_{X,Y} is full support. Studying our problem under the latter hypothesis is interesting because, unlike in the Slepian–Wolf problem, it does not necessarily correspond to a worst-case scenario. Recall, indeed, that when P_{X,Y} is full support, the Slepian–Wolf encoder does not benefit from the side information available at the decoder and needs to send X. In our problem instead, if the retrieval function is f(X, Y) = Y, then, since the decoder already has access to Y, no information needs to be sent by the encoder and the optimal rate is 0. Finally, the proposed algorithm relies on our "pairwise shared side information" result, which gives the optimal rate for every function g, and it performs a greedy partition coarsening when choosing the next achievable point. Moreover, it runs in polynomial time.
This paper is organized as follows. In Section 2, we formally present the zero-error coding for computing problems and the encoder’s side information design problem. In Section 3, we give our theoretic results on the zero-error coding for computing problems, including the “pairwise shared side information” condition. In Section 4, we present our greedy algorithms for the encoder’s side information design problem.

2. Formal Presentation of the Problem

We denote sequences by x^n = (x_1, …, x_n). The set of probability distributions over X is denoted by Δ(X). The distribution of X is denoted by P_X ∈ Δ(X) and its support is denoted by supp P_X. Given the sequence length n ∈ ℕ, we denote by Δ_n(X) ⊂ Δ(X) the set of empirical distributions of sequences from X^n. We denote by {0, 1}* the set of binary words. The collection of subsets of a set Y is denoted by P(Y).
Definition 1.
The zero-error source-coding problem of Figure 1 is described by the following:
-
Four finite sets U, X, Y, Z and a source distribution P_{X,Y} ∈ Δ(X × Y).
-
For all n ∈ ℕ, (X^n, Y^n) is the random sequence of n copies of (X, Y), drawn in an i.i.d. fashion from P_{X,Y}.
-
Two deterministic functions f : X × Y → U and g : Y → Z.
-
An encoder that knows X^n and (g(Y_t))_{t≤n} sends binary strings over a noiseless channel to a decoder that knows Y^n and that wants to retrieve (f(X_t, Y_t))_{t≤n} without error.
A coding scheme in this setting is described by:
-
A time horizon n ∈ ℕ and an encoding function ϕ_e : X^n × Z^n → {0, 1}* such that Im ϕ_e is prefix-free.
-
A decoding function ϕ_d : Y^n × {0, 1}* → U^n.
-
The rate is the average length of the codeword per source symbol, i.e., R ≜ (1/n) E[ℓ(ϕ_e(X^n, (g(Y_t))_{t≤n}))], where ℓ denotes the codeword length function.
-
The triple (n, ϕ_e, ϕ_d) must satisfy the zero-error property:
$\mathbb{P}\Big[\phi_d\big(Y^n, \phi_e(X^n, (g(Y_t))_{t\le n})\big) \neq \big(f(X_t, Y_t)\big)_{t\le n}\Big] = 0.$
The minimal rate under the zero-error constraint is defined by
$R^*(g) \triangleq \inf_{(n, \phi_e, \phi_d)\ \text{zero-error}} \frac{1}{n}\, \mathbb{E}\Big[\ell\big(\phi_e(X^n, (g(Y_t))_{t\le n})\big)\Big].$
The definition of the Pareto front that we give below is adapted to the encoder’s side information design problem and allows us to describe the best trade-off between the quality of the encoder side information and the rate to compute the function f ( X , Y ) at the decoder. In other works, the definition of a Pareto front may differ depending on the minimization/maximization problem considered and on the number of variables to be optimized.
Definition 2
(Pareto front). Let S R + 2 be a set, the Pareto front of S is defined by
$\mathrm{Par}(S) \triangleq \big\{ x \in S \ \big|\ \forall x' \in S \setminus \{x\},\ x'_1 > x_1 \ \text{or}\ x'_2 > x_2 \big\}.$
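As a side illustration (ours, not part of the paper), Definition 2 can be evaluated directly on a finite set of points; the following Python sketch keeps a point when no other point is smaller or equal in both coordinates.

```python
def pareto_front(points):
    """Return Par(S) for a finite set S of 2D points, as in Definition 2.

    A point x stays on the front when every other point x' satisfies
    x'[0] > x[0] or x'[1] > x[1], i.e., no other point weakly dominates x.
    """
    front = []
    for x in points:
        dominated = any(
            y != x and y[0] <= x[0] and y[1] <= x[1]
            for y in points
        )
        if not dominated:
            front.append(x)
    return front

# Example: only the lower-left points survive.
S = [(0.0, 2.0), (1.0, 1.0), (2.0, 0.5), (1.5, 1.5)]
print(pareto_front(S))  # [(0.0, 2.0), (1.0, 1.0), (2.0, 0.5)]
```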
Definition 3.
The side information design problem in Figure 1 consists of determining the Pareto front of the achievable pairs ( H ( g ( Y ) ) , R * ( g ) ) :
$\mathcal{F} \triangleq \mathrm{Par}\Big(\mathrm{Conv}\big\{ \big(H(g(Y)),\, R^*(g)\big) \ \big|\ g : \mathcal{Y} \to \mathcal{Z} \big\}\Big),$   (6)
where Conv denotes the convex hull.
In our zero-error setup, all alphabets are finite. Therefore, the Pareto front of the convex hull in (6) is computed on a finite set of points, which correspond to the best trade-offs for the encoder’s side information.

3. Theoretic Results

Determining the optimal rate in the zero-error coding for computing problems, with or without encoder side information, is an open problem. In a previous contribution [13], we determined a condition that, when satisfied, yields an analytic expression for the optimal rate. Interestingly, this condition is general as it does not depend on the function f to be retrieved at the decoder.

3.1. General Case

We first build the characteristic graph G_{[n]}, which is a probabilistic graph that captures the zero-error encoding constraints for a given number n of source uses. It differs from the graphs used in [5], as we do not need a Cartesian representation of these graphs to study the optimal rates. Furthermore, it has a vertex for each possible realization of (X^n, (g(Y_t))_{t≤n}) known at the encoder, instead of X^n as in the zero-error Slepian–Wolf problem [14].
Definition 4
(Characteristic graph G [ n ] ). The characteristic graph G [ n ] is defined by the following:
-
X n × Z n as a set of vertices with distribution P X , g ( Y ) n .
-
$(x^n, z^n) \neq (x'^n, z'^n)$ are adjacent if $z^n = z'^n$ and there exists $y^n \in g^{-1}(z^n)$ such that
$\forall t \le n, \quad P_{X,Y}(x_t, y_t)\, P_{X,Y}(x'_t, y_t) > 0,$   (7)
$\text{and} \quad \exists t \le n, \quad f(x_t, y_t) \neq f(x'_t, y_t);$   (8)
where $g^{-1}(z^n) = \big\{ y^n \in \mathcal{Y}^n \ \big|\ (g(y_t))_{t\le n} = z^n \big\}$.
The characteristic graph G_{[n]} is designed with the same core idea as in [15]: (x^n, z^n) and (x'^n, z^n) are adjacent if there exists a side information sequence y^n compatible with the observation of the encoder (i.e., z^n = z'^n and y^n ∈ g^{-1}(z^n)), such that f(x^n, y^n) ≠ f(x'^n, y^n). In order to prevent erroneous decodings, the encoder must map adjacent pairs of sequences to different codewords; hence the use of graph colorings, defined below.
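To make Definition 4 concrete, the following Python sketch (our own illustration, with hypothetical function names) builds the edge set of the one-shot graph G_{[1]} from a joint distribution, a retrieval function f and an encoder side information function g.

```python
from itertools import product

def characteristic_graph_one_shot(X, Y, p_xy, f, g):
    """Edge set of G_[1] (Definition 4 with n = 1): vertices are pairs (x, g(y)),
    and (x, z), (x', z) are adjacent when some y with g(y) = z is compatible with
    both x and x' (P_{X,Y}(x, y) P_{X,Y}(x', y) > 0) and f(x, y) != f(x', y)."""
    edges = set()
    for z in {g(y) for y in Y}:
        ys = [y for y in Y if g(y) == z]
        for x, xp in product(X, X):
            if x == xp:
                continue
            if any(p_xy[(x, y)] * p_xy[(xp, y)] > 0 and f(x, y) != f(xp, y) for y in ys):
                edges.add(frozenset(((x, z), (xp, z))))
    return edges
```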
Definition 5
(Coloring, independent subset). Let G = (V, E, P_V) be a probabilistic graph. A subset S ⊆ V is independent if xx' ∉ E for all x, x' ∈ S. Let C be a finite set (the set of colors); a mapping c : V → C is a coloring if c^{-1}(i) is an independent subset for all i ∈ C.
The chromatic entropy of G [ n ] gives the best rate of n-shot zero-error encoding functions, as in [14].
Definition 6
(Chromatic entropy H χ ). The chromatic entropy of a probabilistic graph G = ( V , E , P V ) is defined by
$H_\chi(G) = \inf\big\{ H(c(V)) \ \big|\ c \text{ is a coloring of } G \big\}.$
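For very small graphs, the chromatic entropy of Definition 6 can be computed by brute force, as in the following Python sketch (ours, purely illustrative): enumerate the partitions of the vertex set into independent sets, i.e., the colorings up to relabeling, and keep the one whose induced color distribution has minimal entropy.

```python
from itertools import combinations
from math import log2

def set_partitions(items):
    """Yield all partitions of a list of items (exponential; small graphs only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        # put `first` in an existing block ...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ... or in a new block
        yield [[first]] + part

def chromatic_entropy(vertices, edges, p):
    """H_chi(G) by brute force; `edges` is a set of pairs, `p` maps vertex -> prob."""
    edges = {frozenset(e) for e in edges}
    best = float("inf")
    for part in set_partitions(list(vertices)):
        # every color class must be an independent set
        if any(frozenset((u, v)) in edges for block in part for u, v in combinations(block, 2)):
            continue
        probs = [sum(p[v] for v in block) for block in part]
        best = min(best, -sum(q * log2(q) for q in probs if q > 0))
    return best

# Example: a 4-cycle 0-1-2-3-0 with uniform vertex probabilities.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(chromatic_entropy(V, E, {v: 0.25 for v in V}))  # 1.0 (color classes {0,2} and {1,3})
```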
Theorem 1
(Optimal rate). The optimal rate is written as follows:
$R^*(g) = \lim_{n\to\infty} \frac{1}{n}\, H_\chi(G_{[n]}).$
Proof. 
By construction, the following holds: for all encoding functions ϕ e , ϕ e is a coloring of G [ n ] if and only if there exists a decoding function ϕ d such that ( n , ϕ e , ϕ d ) satisfies the zero-error property. Thus, the best achievable rate is written as follows:
$R^*(g) = \inf_{n} \frac{1}{n} \inf_{\phi_e \text{ coloring of } G_{[n]}} H\big(\phi_e(X^n, (g(Y_t))_{t\le n})\big)$   (11)
$= \lim_{n\to\infty} \frac{1}{n}\, H_\chi(G_{[n]}),$   (12)
where (12) comes from Fekete’s Lemma and from the definition of H χ .    □
A general single-letter expression for R*(g) is missing, due to the lack of intrinsic structure in G_{[n]}. In Section 3.2, we introduce a hypothesis that gives structure to G_{[n]} and allows us to derive a single-letter expression for R*(g).

3.2. Pairwise Shared Side Information

Definition 7.
The distribution P X , Y and the function g satisfy the “pairwise shared side information” condition if
$\forall z \in \mathrm{Im}(g),\ \forall x, x' \in \mathcal{X},\ \exists y \in g^{-1}(z), \quad P_{X,Y}(x, y)\, P_{X,Y}(x', y) > 0,$   (13)
where Im(g) is the image of the function g. This means that for every output z of g, every pair (x, x') "shares" at least one side information symbol y ∈ g^{-1}(z).
Note that any full-support distribution P X , Y satisfies the “pairwise shared side information” hypothesis. In Theorem 2, we give an interpretation of the “pairwise shared side information” condition in terms of the optimal rate in an auxiliary zero-error Slepian–Wolf problem.
Theorem 2.
The tuple (P_{X,Y}, g) satisfies the "pairwise shared side information" condition (13) ⟺ R*(g) = H(X | g(Y)) in the case f(X, Y) = X and, for all z ∈ Z, P_{X | g(Y) = z} is full support.
The proof of Theorem 2 is given in Appendix A.1.
Definition 8
(AND, OR product). Let G_1 = (V_1, E_1, P_{V_1}) and G_2 = (V_2, E_2, P_{V_2}) be two probabilistic graphs; their AND (resp. OR) product, denoted by G_1 ∧ G_2 (resp. G_1 ∨ G_2), is defined by the following: V_1 × V_2 as a set of vertices, P_{V_1} P_{V_2} as probability distribution on the vertices, and (v_1, v_2) ≠ (v'_1, v'_2) are adjacent if
$v_1 v'_1 \in E_1 \ \text{AND}\ v_2 v'_2 \in E_2 \qquad \Big(\text{resp. } (v_1 v'_1 \in E_1 \text{ and } v_1 \neq v'_1)\ \text{OR}\ (v_2 v'_2 \in E_2 \text{ and } v_2 \neq v'_2)\Big);$
with the convention that all vertices are self-adjacent. We denote by G_1^{∧n} (resp. G_1^{∨n}) the n-th AND (resp. OR) power.
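The following Python sketch (our own encoding of probabilistic graphs as (vertices, edges, prob) triples, not code from the paper) implements the OR product of Definition 8; the AND product is analogous, using the self-adjacency convention.

```python
from itertools import product

def or_product(G1, G2):
    """OR product of two probabilistic graphs (Definition 8).

    A graph is a triple (vertices, edges, prob): `edges` is a set of frozensets of
    two distinct vertices, `prob` maps vertex -> probability; self-loops are implicit.
    """
    V1, E1, P1 = G1
    V2, E2, P2 = G2
    V = [(v1, v2) for v1 in V1 for v2 in V2]
    E = set()
    for (u1, u2), (v1, v2) in product(V, V):
        if (u1, u2) == (v1, v2):
            continue
        adj1 = u1 != v1 and frozenset((u1, v1)) in E1
        adj2 = u2 != v2 and frozenset((u2, v2)) in E2
        if adj1 or adj2:
            E.add(frozenset(((u1, u2), (v1, v2))))
    P = {(v1, v2): P1[v1] * P2[v2] for v1, v2 in V}
    return V, E, P

# Example: the OR product of a single edge with itself is the complete graph K4.
G = ([0, 1], {frozenset((0, 1))}, {0: 0.5, 1: 0.5})
V, E, P = or_product(G, G)
print(len(V), len(E))  # 4 6
```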
AND and OR powers differ significantly in terms of the existence of a single-letter expression for the associated asymptotic chromatic entropy. Indeed, in the zero-error Slepian–Wolf problem in [14], the optimal rate lim_{n→∞} (1/n) H_χ(G^{∧n}), which relies on an AND power, does not have a known single-letter expression. Instead, closed-form expressions exist for OR powers of graphs. More precisely, as recalled in Proposition 1, lim_{n→∞} (1/n) H_χ(G^{∨n}) admits a single-letter expression called the Körner graph entropy, introduced in [16] and defined below. This observation is key for us to derive a single-letter expression for our problem. More precisely, by using a convex combination of Körner graph entropies, we provide in Theorem 3 a single-letter expression for the optimal rate R*(g).
Definition 9
(Körner graph entropy H κ ). For all G = ( V , E , P V ) , let Γ ( G ) be the collection of independent sets of vertices in G. The Körner graph entropy of G is defined by
$H_\kappa(G) = \min_{V \in W,\ W \in \Gamma(G)} I(W; V),$
where the minimum is taken over all distributions P_{W|V} ∈ Δ(W)^{|V|}, with W = Γ(G) and the constraint that the random vertex V belongs to the random set W with probability one.
Below, we recall that the limit of the normalized chromatic entropy of the OR product of graphs admits a closed-form expression given by the Körner graph entropy H κ . Moreover, the Körner graph entropy of OR products of graphs is simply the sum of the individual Körner graph entropies.
Proposition 1
(Properties of H_κ, Theorem 5 in [14]). For all probabilistic graphs G and G′,
$H_\kappa(G) = \lim_{n\to\infty} \frac{1}{n}\, H_\chi(G^{\vee n}),$   (16)
$H_\kappa(G \vee G') = H_\kappa(G) + H_\kappa(G').$   (17)
Definition 10
(Auxiliary graph G z f ). For all z Z , we define the auxiliary graph G z f by
-
X as set of vertices, with distribution P_{X | g(Y) = z};
-
x ≠ x' are adjacent if f(x, y) ≠ f(x', y) for some y ∈ g^{-1}(z) ∩ supp P_{Y | X = x} ∩ supp P_{Y | X = x'}.
Theorem 3
(Pairwise shared side information). If P X , Y and g satisfy (13), the optimal rate is written as follows:
$R^*(g) = \sum_{z \in \mathcal{Z}} P_{g(Y)}(z)\, H_\kappa(G_z^f).$
The proof is given in Appendix A.2; the key point is the particular structure of G_{[n]}: a disjoint union of OR products.
Remark 1.
The “pairwise shared side information” assumption (13) implies that the adjacency condition (7) is satisfied, which makes G [ n ] a disjoint union of OR products. Moreover, Körner graph entropies appear in the final expression for R * ( g ) , even if G [ n ] is not an n-th OR power.
Now, consider the case where P X , Y is full support. This is a sufficient condition to have (13). The optimal rate in this setting is derived from Theorem 3, which leads to the analytic expression in Theorem 4.
Theorem 4
(Optimal rate when P X , Y is full support). When P X , Y is full support, the optimal rate is written as follows:
$R^*(g) = H\big(j(X, g(Y)) \,\big|\, g(Y)\big),$
where the function j returns a word in U* and is defined by
$j : \mathcal{X} \times \mathcal{Z} \to \mathcal{U}^*, \qquad (x, z) \mapsto \big(f(x, y)\big)_{y \in g^{-1}(z)}.$
Proof. 
By Theorem 3, R * ( g ) = z Z P g ( Y ) ( z ) H κ ( G z f ) . It can be shown that G z f is complete multipartite for all z as P X , Y is full support; and it satisfies H κ ( G z f ) = H j ( X , g ( Y ) ) | g ( Y ) = z .    □
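Under the full-support hypothesis, the closed-form expression of Theorem 4 can be evaluated directly. The following Python sketch (our own illustration; the function names are ours) computes R*(g) = H(j(X, g(Y)) | g(Y)) from a joint distribution and the functions f and g.

```python
from collections import defaultdict
from math import log2

def optimal_rate_full_support(p_xy, f, g, X, Y):
    """Evaluate R*(g) = H(j(X, g(Y)) | g(Y)) (Theorem 4).

    Assumes p_xy[(x, y)] > 0 for every pair (full support). The word
    j(x, z) = (f(x, y))_{y in g^{-1}(z)} is represented as a tuple.
    """
    def j(x, z):
        return tuple(f(x, y) for y in Y if g(y) == z)

    p_jz, p_z = defaultdict(float), defaultdict(float)
    for x in X:
        for y in Y:
            z = g(y)
            p_jz[(j(x, z), z)] += p_xy[(x, y)]
            p_z[z] += p_xy[(x, y)]

    h = lambda dist: -sum(p * log2(p) for p in dist.values() if p > 0)
    return h(p_jz) - h(p_z)  # H(J, Z) - H(Z) = H(J | Z)

# Toy check with the problem data used for Figures 3-5 and g = Id: then
# j(x, z) = (f(x, z),) and the rate reduces to H(f(X, Y) | Y).
# (The row/column orientation of F is our assumption.)
F = [[0, 0, 0, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 1, 1]]
X = Y = range(4)
p = {(x, y): 1 / 16 for x in X for y in Y}
print(optimal_rate_full_support(p, lambda x, y: F[x][y], lambda y: y, X, Y))
```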

3.3. Example

In this example, the "pairwise shared side information" assumption is satisfied, and R*(g) is strictly less than the rate of a conditional Huffman coding of X knowing g(Y), and also strictly less than the optimal rate obtained without exploiting g(Y) at the encoder.
Consider the probability distribution and function outcomes depicted in Figure 2, with U = { a , b , c } , X = { 0 , , 3 } , Y = { 0 , , 7 } , and Z = { 0 , 1 } . Let us show that the “pairwise shared side information” assumption is satisfied. The source symbols 0 , 1 , 2 X share the side information symbol 0 (resp. 5) when g ( Y ) = 0 (resp. g ( Y ) = 1 ). The source symbol 3 X shares the side information symbols 1 , 2 , 3 with the source symbols 0 , 1 , 2 , respectively, when g ( Y ) = 0 , and the source symbol 3 shares the side information symbol 5 with all other source symbols when g ( Y ) = 1 .
Since the “pairwise shared side information” assumption is satisfied, we can use Theorem 3; the optimal rate is written as follows:
R * ( g ) = P g ( Y ) ( 0 ) H κ ( G 0 f ) + P g ( Y ) ( 1 ) H κ ( G 1 f ) .
First, we need to determine the probabilistic graphs G_0^f and G_1^f. In G_0^f, the vertex 0 is adjacent to 2 and 3, as f(0, 0) ≠ f(2, 0) and f(0, 1) ≠ f(3, 1). The vertex 1 is also adjacent to 2 and 3, as f(1, 0) ≠ f(2, 0) and f(1, 2) ≠ f(3, 2). Furthermore, P_{X | g(Y) = 0} is uniform; hence G_0^f = (C_4, Unif(X)), where C_4 is the cycle graph with 4 vertices.
In G_1^f, the vertices 1, 2, 3 are pairwise adjacent, as f(1, 5), f(2, 5) and f(3, 5) are pairwise different; and 0 is adjacent to 1, 2, and 3 because of the different function outputs generated by Y = 4 and Y = 5. Thus, G_1^f = (K_4, P_{X | g(Y) = 1}) with P_{X | g(Y) = 1} = (1/4, 3/8, 1/8, 1/4), and K_4 is the complete graph with 4 vertices.
Now, let us determine H κ ( G 0 f ) and H κ ( G 1 f ) . On the one hand,
$H_\kappa(G_0^f) = H(V_0) - \max_{V_0 \in W \in \Gamma(G_0^f)} H(V_0 \mid W)$   (22)
$= 2 - 1 = 1,$   (23)
with V_0 ~ P_{X | g(Y) = 0} = Unif(X); and where H(V_0 | W) in (22) is maximized by taking W = {0, 1} when V_0 ∈ {0, 1}, and W = {2, 3} otherwise.
On the other hand,
$H_\kappa(G_1^f) = \min_{V_1 \in W \in \Gamma(G_1^f)} I(W; V_1)$   (24)
$= H(V_1) \approx 1.906,$   (25)
with V_1 ~ P_{X | g(Y) = 1}; where (25) follows from Γ(G_1^f) = {{0}, …, {3}}, as G_1^f is complete. Hence, R*(g) ≈ 1.362.
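For completeness, the numerical value in (25) follows from a direct computation:
$H(V_1) = H\big(\tfrac{1}{4}, \tfrac{3}{8}, \tfrac{1}{8}, \tfrac{1}{4}\big) = \tfrac{1}{4}\log_2 4 + \tfrac{3}{8}\log_2\tfrac{8}{3} + \tfrac{1}{8}\log_2 8 + \tfrac{1}{4}\log_2 4 \approx 0.500 + 0.531 + 0.375 + 0.500 \approx 1.906.$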
The rate that we would obtain by transmitting X knowing g(Y) at both encoder and decoder with a conditional Huffman algorithm is R_Huff = H(X | g(Y)) ≈ 1.962.
The rate that we would obtain without exploiting g(Y) at the encoder is R_{No g} = H(X) ≈ 1.985, because of the different function outputs generated by Y = 4 and Y = 5.
Finally, H(f(X, Y) | Y) ≈ 0.875.
In this example, we have
H ( X ) = R No g > R Huff > R * ( g ) > H ( f ( X , Y ) | Y ) .
This illustrates the impact of the side information at the encoder in this setting, as we can observe a large gap between the optimal rate R * ( g ) and R No g .

4. Optimization of the Encoder Side Information

4.1. Preliminary Results on Partitions

In order to optimize the function g in the encoder side information, we propose a new equivalent characterization of the function g in the form of a partition of the set Y . The equivalence is shown in Proposition 2 below.
Proposition 2.
For all g : Y → Z, the collection of subsets (g^{-1}(z))_{z ∈ Z} is a partition of Y.
Conversely, if A ⊆ P(Y) is a partition of Y, then there exists a mapping g_A : Y → Z such that ∀z ∈ Im g_A, ∃A_z ∈ A, A_z = g_A^{-1}(z).
Proof. 
The direct part follows directly from the fact that g is a function. For the converse part, we take Z such that |Z| = |A| and we define g_A : Y → Z by g_A(y) = z, where z ∈ Z is the unique index such that y ∈ A_z. The property ∀z ∈ Im g_A, ∃A_z ∈ A, A_z = g_A^{-1}(z) is therefore satisfied.    □
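The converse construction of Proposition 2 is straightforward to implement; the following Python sketch (ours) identifies Z with the block indices of the partition.

```python
def partition_to_function(partition):
    """Build g_A : Y -> Z from a partition A of Y (Proposition 2, converse part).

    Z is taken as {0, ..., |A| - 1}; g_A(y) is the index of the block containing y.
    """
    return {y: z for z, block in enumerate(partition) for y in block}

# Example: A = {{1}, {2, 3, 4}} gives g_A(1) = 0 and g_A(2) = g_A(3) = g_A(4) = 1.
print(partition_to_function([{1}, {2, 3, 4}]))
```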
Now, let us define coarser and finer partitions, with the corresponding notions of merging and splitting. These operations on partitions are the core idea of our greedy algorithms; as illustrated in Proposition 2, the partitions of Y correspond to functions g : Y Z for the encoder’s side information. Therefore, obtaining a partition from another means finding another function g : Y Z for the encoder’s side information.
Definition 11
(Coarser, Finer). Let A, B ⊆ P(Y) be two partitions of the finite set Y. We say that A is coarser than B if
$\forall B \in \mathcal{B},\ \exists A \in \mathcal{A},\ B \subseteq A.$
If so, we also say that B is finer than A .
Example 1.
Let Y = {1, 2, 3, 4}; the partition A = {{1}, {2, 3, 4}} is coarser than B = {{1}, {2}, {3, 4}}.
Definition 12
(Merging, Splitting). A merging is an operation that maps a partition A = {A_1, …, A_i, …, A_j, …, A_m} to the partition A′ = {A_1, …, A_i ∪ A_j, …, A_m}. A splitting is an operation that maps a partition A = {A_1, …, A_i, …, A_m} to the partition A′ = {A_1, …, A_i^{(1)}, A_i^{(2)}, …, A_m}, where {A_i^{(1)}, A_i^{(2)}} forms a partition of the subset A_i.
We also define the set of partitions Merge ( A ) (resp. Split ( A ) ), which correspond to all partitions that can be obtained with a merging (resp. splitting) of A :
$\mathrm{Merge}(\mathcal{A}) \triangleq \big\{ m(\mathcal{A}) \ \big|\ m \text{ is a merging} \big\};$
$\mathrm{Split}(\mathcal{A}) \triangleq \big\{ s(\mathcal{A}) \ \big|\ s \text{ is a splitting} \big\}.$
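As an illustration (ours, not from the paper), the sets Merge(A) and Split(A) can be enumerated directly; this is the elementary step of the greedy algorithms of Section 4.2.

```python
from itertools import combinations

def merges(partition):
    """Merge(A): all partitions obtained from `partition` (a list of frozensets)
    by merging exactly two of its blocks."""
    out = []
    for i, j in combinations(range(len(partition)), 2):
        rest = [b for k, b in enumerate(partition) if k not in (i, j)]
        out.append(rest + [partition[i] | partition[j]])
    return out

def splits(partition):
    """Split(A): all partitions obtained by splitting one block into two nonempty parts."""
    out = []
    for i, block in enumerate(partition):
        items = sorted(block)
        for r in range(1, len(items)):
            for left in combinations(items, r):
                if items[0] not in left:  # count each unordered bipartition once
                    continue
                left = frozenset(left)
                out.append(partition[:i] + [left, frozenset(block) - left] + partition[i + 1:])
    return out

A = [frozenset({1, 2}), frozenset({3, 4})]
print(len(merges(A)), len(splits(A)))  # 1 2
```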
Proposition 3.
If A is coarser (resp. finer) than B , then A can be obtained from B by performing a finite number of mergings (resp. splittings).

4.2. Greedy Algorithms Based on Partition Coarsening and Refining

In this Section, we assume P X , Y to be full support.
With Proposition 2, we know that determining the Pareto front by a brute-force approach would at least require enumerating the partitions of Y. Therefore, the complexity of this approach is exponential in |Y|. In the following, we describe the greedy Algorithms 1 and 2, which give an achievable set for the encoder's side information design problem; one of them has polynomial complexity. Then we give an example where the Pareto front coincides with the boundary of the convex hull of the achievable rate region obtained by both greedy algorithms.
Algorithm 1 Greedy coarsening algorithm
1: A ← {{1}, …, {|Y|}}    // A starts as the finest partition of Y, i.e., g_A = Id
2: Front ← [A, ndef, …, ndef]    // will contain the list of the |Y| partitions chosen during the execution
3: for i ∈ {1, …, |Y| − 1} do
4:     // maximize over B, merging of A, the slope between (H(g_B(Y)), R*(g_B)) and (H(g_A(Y)), R*(g_A))
5:     A ← argmax_{B ∈ Merge(A)} [R*(g_B) − R*(g_A)] / [H(g_B(Y)) − H(g_A(Y))]
6:     Front[i] ← A
7: end for
8: return Front    // A = {{1, …, |Y|}} at this point
Algorithm 2 Greedy refining algorithm
1: A ← {{1, …, |Y|}}    // A starts as the coarsest partition of Y, i.e., g_A is constant
2: Front ← [A, ndef, …, ndef]    // will contain the list of the |Y| partitions chosen during the execution
3: for i ∈ {1, …, |Y| − 1} do
4:     // minimize over B, splitting of A, the slope between (H(g_A(Y)), R*(g_A)) and (H(g_B(Y)), R*(g_B))
5:     A ← argmin_{B ∈ Split(A)} [R*(g_B) − R*(g_A)] / [H(g_B(Y)) − H(g_A(Y))]
6:     Front[i] ← A
7: end for
8: return Front    // A = {{1}, …, {|Y|}} at this point
In these algorithms, argmin (resp. argmax) denotes any minimizer (resp. maximizer) of the specified quantity, and g_A : Y → Z is a function for the encoder's side information corresponding to the partition A, whose existence is given by Proposition 2.
The coarsening (resp. refining) algorithm starts by computing its first achievable point (H(g_A(Y)), R*(g_A)) with A being the finest (resp. coarsest) partition: it evaluates R*(g_A), with g_A = Id (resp. g_A constant), and H(g_A(Y)) = H(Y) (resp. H(g_A(Y)) = 0). Then, at each iteration, the next achievable point is computed by using a merging (resp. splitting) of the current partition A. The next partition is a coarser (resp. finer) partition chosen from Merge(A) (resp. Split(A)), following a greedy approach. Since we want to achieve good trade-offs between H(g_A(Y)) and R*(g_A), we want to decrease H(g_A(Y)) (resp. R*(g_A)) as much as possible while increasing the other quantity as little as possible. We do so by maximizing over B ∈ Merge(A) (resp. minimizing over B ∈ Split(A)) the negative ratio
$\frac{R^*(g_B) - R^*(g_A)}{H(g_B(Y)) - H(g_A(Y))};$
hence the use of slope maximization (resp. minimization) in the algorithms. At the end, the set of achievable points computed by the algorithm is returned.
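As a hedged illustration of Algorithm 1 (the names and data layout are ours), the following Python sketch runs the greedy coarsening loop against an abstract rate oracle rate(partition) returning R*(g_A), e.g., obtained from the Theorem 4 formula when P_{X,Y} is full support.

```python
from itertools import combinations
from math import log2

def entropy(probs):
    """Shannon entropy (in bits) of a probability vector."""
    return -sum(p * log2(p) for p in probs if p > 0)

def greedy_coarsening(Y, p_y, rate):
    """Greedy coarsening loop of Algorithm 1 (sketch).

    `Y` is the side-information alphabet, `p_y` maps y -> P_Y(y) (all assumed > 0),
    and `rate(partition)` returns R*(g_A) for a partition A of Y.
    Returns the visited partitions with their (H(g_A(Y)), R*(g_A)) pairs.
    """
    def h_g(partition):  # H(g_A(Y))
        return entropy([sum(p_y[y] for y in block) for block in partition])

    A = [frozenset({y}) for y in Y]  # finest partition: g_A = Id
    front = [(A, h_g(A), rate(A))]
    while len(A) > 1:
        best, best_slope = None, float("-inf")
        for i, j in combinations(range(len(A)), 2):
            B = [b for k, b in enumerate(A) if k not in (i, j)] + [A[i] | A[j]]
            # negative slope: R* increases while H(g(Y)) strictly decreases (p_y > 0)
            slope = (rate(B) - rate(A)) / (h_g(B) - h_g(A))
            if slope > best_slope:
                best, best_slope = B, slope
        A = best
        front.append((A, h_g(A), rate(A)))
    return front
```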
In Figure 3, we show rate pairs associated with all possible partitions of Y : a point corresponds to a partition of Y , its position gives the associated rates H ( g ( Y ) ) , R * ( g ) . Two points are linked if their corresponding partitions A , B satisfy A Merge ( B ) or A Split ( B ) . The obtained graph is the Hasse diagram for the partial order “coarser than”. Note that due to symmetries in the chosen example, several points associated with different partitions may overlap. In Figure 4, (resp. Figure 5), we give an illustration of the trajectory of the greedy coarsening (resp. refining) algorithm.
Figure 3, Figure 4 and Figure 5 are obtained with the following problem data:
$P_{X,Y} = \mathrm{Unif}(\mathcal{X} \times \mathcal{Y}), \qquad f(\cdot,\cdot) = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{pmatrix}.$
As stated in Theorem 5, the complexity of the coarsening greedy algorithm is polynomial since | Merge ( A ) | is quadratic in | Y | and the evaluation of R * ( g ) can be conducted in polynomial time. This polynomial complexity property is not satisfied by the refining greedy algorithm, as | Split ( A ) | is exponential in | Y | .
Theorem 5.
The coarsening greedy algorithm runs in polynomial time in | Y | . The refining greedy algorithm runs in exponential time in | Y | .
Proof. 
The number of points evaluated by the coarsening (resp. refining) greedy algorithm is O(|Y|^3) (resp. O(2^{|Y|})): O(|Y|) mergings (resp. splittings) are made, and for each of them, all partitions from Merge(A) (resp. Split(A)) are evaluated; there are at most $\binom{|\mathcal{Y}|}{2} = O(|\mathcal{Y}|^2)$ of them (resp. O(2^{|Y|}) in the worst case A = {{1, …, |Y|}}). Since the expression R*(g) = H(j(X, g(Y)) | g(Y)) from Theorem 4 allows for an evaluation of R*(g) in polynomial time in |Y|, the coarsening (resp. refining) greedy algorithm has a polynomial (resp. exponential) time complexity.    □

Author Contributions

Formal analysis, N.C.; validation, N.C., M.L.T. and A.R.; writing—original draft preparation, N.C.; writing—review and editing, M.L.T. and A.R.; supervision, M.L.T. and A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Cominlabs excellence laboratory with the French National Research Agency’s funding (ANR-10-LABX-07-01).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1. Proof of Theorem 2

Consider the particular case f ( X , Y ) = X of Figure 1. The optimal rate in this particular case equals the optimal rate R * ( g ) in the following auxiliary problem, depicted in Figure A1: ( X , g ( Y ) ) as source available at the encoder and to be retrieved by the decoder which knows Y (thus, expecting it to retrieve g ( Y ) in addition to X does not change the optimal rate).
Figure A1. An auxiliary zero-error Slepian–Wolf problem.
Definition A1
(Characteristic graph for the zero-error Slepian–Wolf problem). Let X , Y be two finite sets and P Y | X be a conditional distribution from Δ ( Y ) | X | . The characteristic graph associated with P Y | X is defined by the following:
-
X as set of vertices;
-
x ≠ x' ∈ X are adjacent if P_{Y|X}(y|x) P_{Y|X}(y|x') > 0 for some y ∈ Y.
This auxiliary problem is a particular instance of the zero-error Slepian–Wolf problem; its optimal rate is written as H̄(G), where H̄(G) is the complementary graph entropy [17] and G is the characteristic graph in the Slepian–Wolf problem, defined in Definition A1, for the pair ((X, g(Y)), Y). The graph G has X × Z as a set of vertices, and (x, z) is adjacent to (x', z') if there exists a side information symbol y ∈ Y such that P_{X,Y,g(Y)}(x, y, z) P_{X,Y,g(Y)}(x', y, z') > 0. It can be observed that the vertices (x, z) and (x', z') such that z ≠ z' are not adjacent in G. The graph G is therefore a disjoint union indexed by Z:
$G = \bigsqcup_{z \in \mathcal{Z}} P_{g(Y)}\, G_z;$   (A1)
$R^*(g) = \overline{H}(G) = \overline{H}\Big( \bigsqcup_{z \in \mathcal{Z}} P_{g(Y)}\, G_z \Big);$   (A2)
where for all z Z , G z is the characteristic graph defined in Definition A1 for the pair ( X z , Y z ) P X , Y | g ( Y ) = z .
( ) Assume that g and P X , Y satisfy the “pairwise shared side information” condition. It directly follows that P X | g ( Y ) = z is full support for all z Z . Let z Z , and let ( x , z ) , ( x , z ) be any two vertices of G z . By construction, there exists y g 1 ( z ) such that P X , Y ( x , y ) P X , Y ( x , y ) > 0 ; hence, P X , Y , g ( Y ) ( x , y , z ) P X , Y , g ( Y ) ( x , y , z ) > 0 , and ( x , z ) , ( x , z ) are adjacent in G z . Each graph G z is therefore complete and perfect; the graph G = z Z P g ( Y ) G z is a disjoint union of perfect graphs and is therefore also perfect. We have the following:
$R^*(g) = \overline{H}\Big( \bigsqcup_{z \in \mathcal{Z}} P_{g(Y)}\, G_z \Big)$   (A3)
$= H_\kappa\Big( \bigsqcup_{z \in \mathcal{Z}} P_{g(Y)}\, G_z \Big)$   (A4)
$= \sum_{z \in \mathcal{Z}} P_{g(Y)}(z)\, H_\kappa(G_z)$   (A5)
$= \sum_{z \in \mathcal{Z}} P_{g(Y)}(z)\, H\big(P_{X \mid g(Y) = z}\big)$   (A6)
$= H(X \mid g(Y));$   (A7)
where (A3) comes from (A2); (A4) and (A5) follow from Corollary 12 in [18] used on the perfect graph z Z P g ( Y ) G z ; and (A6) holds as the independent subsets of the complete graph G z are singletons containing one of its vertices.
( ) Conversely, assume that P X | g ( Y ) = z is full support for all z Z , and R * ( g ) = H ( X | g ( Y ) ) .
Assume, ad absurdum, that at least one of the ( G z ) z Z is not complete; then, there exists a coloring of that graph that maps two different vertices to the same color. Thus, there exists z Z such that
H ¯ ( G z ) < H ( P X | g ( Y ) = z ) ,
as P X | g ( Y ) = z is full support. We have
$H(X \mid g(Y)) = R^*(g)$   (A9)
$= \overline{H}\Big( \bigsqcup_{z \in \mathcal{Z}} P_{g(Y)}\, G_z \Big)$   (A10)
$\le \sum_{z \in \mathcal{Z}} P_{g(Y)}(z)\, \overline{H}(G_z)$   (A11)
$< H(X \mid g(Y));$   (A12)
where (A10) comes from (A2), (A11) results from Theorem 2 in [17], and (A12) follows from (A8). We arrive at a contradiction, and hence all the graphs ( G z ) z Z are complete: for all z Z and x , x X , there exists a side information symbol y Y such that P X , Y , g ( Y ) ( x , y , z ) P X , Y , g ( Y ) ( x , y , z ) > 0 ; hence, y g 1 ( z ) and satisfies P X , Y ( x , y ) P X , Y ( x , y ) > 0 . The condition “pairwise shared side information” is satisfied by P X , Y , g .

Appendix A.2. Proof of Theorem 3

Let us specify the adjacency condition in G_{[n]} under assumption (13). Two vertices are adjacent if they satisfy (7) and (8); however, (7) is always satisfied under (13). Thus, (x^n, z^n) ≠ (x'^n, z'^n) are adjacent if z^n = z'^n and
$\exists y^n \in g^{-1}(z^n),\ \exists t \le n, \quad f(x_t, y_t) \neq f(x'_t, y_t).$   (A13)
It can be observed that condition (A13) is the adjacency condition of an OR product of adequate graphs; more precisely,
$G_{[n]} = \bigsqcup_{z^n \in \mathcal{Z}^n} \bigvee_{t \le n} G_{z_t}^f.$   (A14)
Although G [ n ] cannot be expressed as an n-th OR power, we will show that its chromatic entropy asymptotically coincides with that of an appropriate OR power: we now search for an asymptotic equivalent of H χ ( G [ n ] ) .
Definition A2.
S_n is the set of colorings of G_{[n]} that can be written as (x^n, z^n) ↦ (T_{z^n}, c̃(x^n, z^n)) for some mapping c̃ : X^n × Z^n → C̃; where T_{z^n} denotes the type of z^n.
In the following, we define Z^n ≜ (g(Y_t))_{t≤n}. We now need several lemmas. Lemma A1 states that requiring the coloring of G_{[n]} to carry the type of z^n as a prefix incurs a negligible rate cost. Lemma A3 gives an asymptotic formula for the minimal entropy of the colorings from S_n.
Lemma A1.
The following asymptotic comparison holds:
$H_\chi(G_{[n]}) = \inf_{\substack{c \text{ coloring of } G_{[n]} \\ c \in S_n}} H\big(c(X^n, Z^n)\big) + O(\log n).$
Definition A3
(Isomorphic probabilistic graphs). Let G_1 = (V_1, E_1, P_{V_1}) and G_2 = (V_2, E_2, P_{V_2}) be two probabilistic graphs. We say that G_1 is isomorphic to G_2 if there exists an isomorphism between them, i.e., a bijection ψ : V_1 → V_2 such that
-
For all v_1, v'_1 ∈ V_1, v_1 v'_1 ∈ E_1 ⟺ ψ(v_1)ψ(v'_1) ∈ E_2;
-
For all v_1 ∈ V_1, P_{V_1}(v_1) = P_{V_2}(ψ(v_1)).
Lemma A2.
Let B be a finite set, let P_B ∈ Δ(B), and let (G_b)_{b ∈ B} be a family of isomorphic probabilistic graphs; then $H_\chi\big( \bigsqcup_{b \in \mathcal{B}} P_B\, G_b \big) = H_\chi(G_b)$ for all b ∈ B.
Lemma A3.
The following asymptotic comparison holds:
$\inf_{\substack{c \text{ coloring of } G_{[n]} \\ c \in S_n}} H\big(c(X^n, Z^n)\big) = n \sum_{z \in \mathcal{Z}} P_{g(Y)}(z)\, H_\kappa(G_z^f) + o(n).$
The proof of Lemma A1 is given in Appendix A.3; its key point is the asymptotically negligible entropy of the prefix T_{Z^n} of the colorings of S_n. The proof of Lemma A2 is given in Appendix A.5. The proof of Lemma A3 is given in Appendix A.4 and relies on the decomposition $G_{[n]} = \bigsqcup_{Q_n \in \Delta_n(\mathcal{Z})} G_{[n]}^{Q_n}$, where G_{[n]}^{Q_n} is the subgraph induced by the vertices (x^n, z^n) such that the type of z^n is Q_n. We show that G_{[n]}^{Q_n} is a disjoint union of isomorphic graphs whose chromatic entropy is given by Lemma A2 and (17): $\big| H_\chi(G_{[n]}^{Q_n}) - n \sum_{z \in \mathcal{Z}} Q_n(z)\, H_\kappa(G_z^f) \big| \le n\,\epsilon_n$. Finally, uniform convergence arguments enable us to conclude.
Now, let us combine these results together as follows:
$R^*(g) = \frac{1}{n}\, H_\chi(G_{[n]}) + o(1)$   (A17)
$= \frac{1}{n} \inf_{\substack{c \text{ coloring of } G_{[n]} \\ c \in S_n}} H\big(c(X^n, Z^n)\big) + o(1)$   (A18)
$= \sum_{z \in \mathcal{Z}} P_{g(Y)}(z)\, H_\kappa(G_z^f) + o(1),$   (A19)
where (A17) comes from Theorem 1, (A18) comes from Lemma A1, and (A19) comes from Lemma A3. The proof of Theorem 3 is complete.

Appendix A.3. Proof of Lemma A1

Let c n * be the coloring of G [ n ] with minimal entropy. Then, we have the following:
$H_\chi(G_{[n]}) = \inf_{c \text{ coloring of } G_{[n]}} H\big(c(X^n, Z^n)\big)$   (A20)
$\le \inf_{\substack{c \text{ coloring of } G_{[n]} \\ c \in S_n}} H\big(c(X^n, Z^n)\big)$   (A21)
$= \inf_{c : (x^n, z^n) \mapsto (T_{z^n},\, \tilde{c}(x^n, z^n))} H\big(T_{Z^n}, \tilde{c}(X^n, Z^n)\big)$   (A22)
$\le H(T_{Z^n}) + H\big(c_n^*(X^n, Z^n)\big)$   (A23)
$= H_\chi(G_{[n]}) + O(\log n),$   (A24)
where (A22) comes from Definition A2; (A23) comes from the subadditivity of the entropy and from the fact that (x^n, z^n) ↦ (T_{z^n}, c_n^*(x^n, z^n)) is a coloring of G_{[n]} that belongs to S_n; and (A24) comes from H(T_{Z^n}) = O(log n), as log |Δ_n(Z)| = O(log n). The desired equality comes from the bounds H_χ(G_{[n]}) and H_χ(G_{[n]}) + O(log n) on (A21).

Appendix A.4. Proof of Lemma A3

For all Q_n ∈ Δ_n(Z), let
$G_{[n]}^{Q_n} = \bigsqcup_{\substack{z^n \in \mathcal{Z}^n \\ T_{z^n} = Q_n}} \bigvee_{t \le n} G_{z_t}^f,$   (A25)
with the probability distribution induced by P_{X, Z}^{⊗n}. This graph is formed of the connected components of G_{[n]} whose corresponding z^n has type Q_n. We need to find an equivalent for H_χ(G_{[n]}^{Q_n}). Since G_{[n]}^{Q_n} is a disjoint union of isomorphic graphs, we can use Lemma A2 as follows:
$H_\chi(G_{[n]}^{Q_n}) = H_\chi\Big( \bigvee_{z \in \mathcal{Z}} (G_z^f)^{\vee n Q_n(z)} \Big).$   (A26)
On one hand,
$H_\chi\Big( \bigvee_{z \in \mathcal{Z}} (G_z^f)^{\vee n Q_n(z)} \Big) \ge H_\kappa\Big( \bigvee_{z \in \mathcal{Z}} (G_z^f)^{\vee n Q_n(z)} \Big)$   (A27)
$= n \sum_{z \in \mathcal{Z}} Q_n(z)\, H_\kappa(G_z^f),$   (A28)
where (A27) comes from H_κ ≤ H_χ (Lemma 14 in [14]), and (A28) comes from (17). On the other hand,
$H_\chi\Big( \bigvee_{z \in \mathcal{Z}} (G_z^f)^{\vee n Q_n(z)} \Big) \le \sum_{z \in \mathcal{Z}} Q_n(z)\, H_\chi\big( (G_z^f)^{\vee n} \big)$   (A29)
$\le n \sum_{z \in \mathcal{Z}} Q_n(z)\, H_\kappa(G_z^f) + n\,\epsilon_n,$   (A30)
where $\epsilon_n \triangleq \max_{z} \big| \frac{1}{n} H_\chi\big((G_z^f)^{\vee n}\big) - H_\kappa(G_z^f) \big|$ is a quantity that does not depend on Q_n and satisfies lim_{n→∞} ε_n = 0; (A29) comes from the subadditivity of H_χ. Combining Equations (A26), (A28), and (A30) yields
$\Big| H_\chi(G_{[n]}^{Q_n}) - n \sum_{z \in \mathcal{Z}} Q_n(z)\, H_\kappa(G_z^f) \Big| \le n\,\epsilon_n.$   (A31)
Now, we have an equivalent for H χ ( G [ n ] Q n ) .
$\inf_{\substack{c \text{ coloring of } G_{[n]} \\ c \in S_n}} H\big(c(X^n, Z^n)\big)$   (A32)
$= \inf_{c : (x^n, z^n) \mapsto (T_{z^n},\, \tilde{c}(x^n, z^n))} H\big(\tilde{c}(X^n, Z^n) \mid T_{Z^n}\big) + H(T_{Z^n})$   (A33)
$= \inf_{c : (x^n, z^n) \mapsto (T_{z^n},\, \tilde{c}(x^n, z^n))} \sum_{Q_n \in \Delta_n(\mathcal{Z})} P_{T_{Z^n}}(Q_n)\, H\big(\tilde{c}(X^n, Z^n) \mid T_{Z^n} = Q_n\big) + O(\log n)$   (A34)
$= \sum_{Q_n \in \Delta_n(\mathcal{Z})} P_{T_{Z^n}}(Q_n) \inf_{c_{Q_n} \text{ coloring of } G_{[n]}^{Q_n}} H\big(c_{Q_n}(X^n, Z^n) \mid T_{Z^n} = Q_n\big) + O(\log n)$   (A35)
$= \sum_{Q_n \in \Delta_n(\mathcal{Z})} P_{T_{Z^n}}(Q_n)\, H_\chi(G_{[n]}^{Q_n}) + O(\log n)$   (A36)
$= \sum_{Q_n \in \Delta_n(\mathcal{Z})} P_{T_{Z^n}}(Q_n) \Big( n \sum_{z \in \mathcal{Z}} Q_n(z)\, H_\kappa(G_z^f) \pm n\,\epsilon_n \Big) + O(\log n)$   (A37)
$= n \sum_{Q_n \in \Delta_n(\mathcal{Z})} 2^{-n D(Q_n \| P_{g(Y)}) + o(n)} \sum_{z \in \mathcal{Z}} Q_n(z)\, H_\kappa(G_z^f) \pm n\,\epsilon_n + O(\log n)$   (A38)
$= n \sum_{z \in \mathcal{Z}} P_{g(Y)}(z)\, H_\kappa(G_z^f) + o(n),$   (A39)
where (A34) comes from H(T_{Z^n}) = O(log n), as log |Δ_n(Z)| = O(log n); (A35) follows from the fact that the entropy of c̃ can be minimized independently on each G_{[n]}^{Q_n}; (A36) follows from the definition of G_{[n]}^{Q_n}; (A37) comes from (A31); (A38) comes from Lemma 2.6 in [19] and the fact that ε_n does not depend on Q_n.

Appendix A.5. Proof of Lemma A2

Let (G̃_i)_{i ∈ ℕ} be isomorphic probabilistic graphs and let G = ⨆_i G̃_i. Let c_1^* : V_1 → C be the coloring of G̃_1 with minimal entropy, and let c^* be the coloring of G defined by
$c^* : V \to C, \qquad v \mapsto c_1^*\big(\psi_{i_v}^{-1}(v)\big),$
where i_v is the unique integer such that v ∈ V_{i_v} and ψ_{i_v}^{-1} : V_{i_v} → V_1 is an isomorphism between G̃_{i_v} and G̃_1. In other words, c^* applies the same coloring pattern c_1^* on each connected component of G. We have
$H_\chi(G) \le H\big(c^*(V)\big)$   (A42)
$= h\Big( \sum_{j \in \mathbb{N}} P_{i_V}(j)\, P_{c^*(V_j)} \Big)$   (A43)
$= h\Big( \sum_{j \in \mathbb{N}} P_{i_V}(j)\, P_{c_1^*(V_1)} \Big)$   (A44)
$= H\big(c_1^*(V_1)\big)$   (A45)
$= H_\chi(\tilde{G}_1),$   (A46)
where h denotes the entropy of a distribution; (A44) comes from the definition of c^*; and (A46) comes from the definition of c_1^*.
Now, let us prove the upper bound on H_χ(G̃_1). Let c be a coloring of G, and let i^* ∈ argmin_i H(c(V_i)) (i.e., i^* is the index of the connected component for which the entropy of the coloring induced by c is minimal). We have
$H\big(c(V)\big) = h\Big( \sum_{j \in \mathbb{N}} P_{i_V}(j)\, P_{c(V_j)} \Big)$   (A47)
$\ge \sum_{j \in \mathbb{N}} P_{i_V}(j)\, h\big(P_{c(V_j)}\big)$   (A48)
$\ge \sum_{j \in \mathbb{N}} P_{i_V}(j)\, H\big(c(V_{i^*})\big)$   (A49)
$\ge H_\chi(\tilde{G}_{i^*})$   (A50)
$= H_\chi(\tilde{G}_1),$   (A51)
where (A48) follows from the concavity of h; (A49) follows from the definition of i^*; (A50) comes from the fact that c induces a coloring of G̃_{i^*}; and (A51) comes from the fact that G̃_1 and G̃_{i^*} are isomorphic. Now, we can combine the bounds (A46) and (A51): for all colorings c of G, we have
$H_\chi(G) \le H_\chi(\tilde{G}_1) \le H\big(c(V)\big),$   (A52)
which yields the desired equality when taking the infimum over c.

References

  1. Orlitsky, A.; Roche, J.R. Coding for computing. In Proceedings of the IEEE 36th Annual Foundations of Computer Science, Milwaukee, WI, USA, 23–25 October 1995; IEEE: Piscataway, NJ, USA, 1995; pp. 502–511.
  2. Duan, L.; Liu, J.; Yang, W.; Huang, T.; Gao, W. Video coding for machines: A paradigm of collaborative compression and intelligent analytics. IEEE Trans. Image Process. 2020, 29, 8680–8695.
  3. Gao, W.; Liu, S.; Xu, X.; Rafie, M.; Zhang, Y.; Curcio, I. Recent standard development activities on video coding for machines. arXiv 2021, arXiv:2105.12653.
  4. Yamamoto, H. Wyner–Ziv theory for a general function of the correlated sources (Corresp.). IEEE Trans. Inf. Theory 1982, 28, 803–807.
  5. Shayevitz, O. Distributed computing and the graph entropy region. IEEE Trans. Inf. Theory 2014, 60, 3435–3449.
  6. Krithivasan, D.; Pradhan, S.S. Distributed source coding using abelian group codes: A new achievable rate-distortion region. IEEE Trans. Inf. Theory 2011, 57, 1495–1519.
  7. Basu, S.; Seo, D.; Varshney, L.R. Hypergraph-based coding schemes for two source coding problems under maximal distortion. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020.
  8. Malak, D.; Médard, M. Hyper binning for distributed function coding. In Proceedings of the 2020 IEEE 21st International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Atlanta, GA, USA, 26–29 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–5.
  9. Feizi, S.; Médard, M. On network functional compression. IEEE Trans. Inf. Theory 2014, 60, 5387–5401.
  10. Sefidgaran, M.; Tchamkerten, A. Distributed function computation over a rooted directed tree. IEEE Trans. Inf. Theory 2016, 62, 7135–7152.
  11. Ravi, J.; Dey, B.K. Function computation through a bidirectional relay. IEEE Trans. Inf. Theory 2018, 65, 902–916.
  12. Guang, X.; Yeung, R.W.; Yang, S.; Li, C. Improved upper bound on the network function computing capacity. IEEE Trans. Inf. Theory 2019, 65, 3790–3811.
  13. Charpenay, N.; Le Treust, M.; Roumy, A. Optimal zero-error coding for computing under pairwise shared side information. In Proceedings of the 2023 IEEE Information Theory Workshop (ITW), Saint-Malo, France, 23–28 April 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 97–101.
  14. Alon, N.; Orlitsky, A. Source coding and graph entropies. IEEE Trans. Inf. Theory 1996, 42, 1329–1339.
  15. Witsenhausen, H. The zero-error side information problem and chromatic numbers (Corresp.). IEEE Trans. Inf. Theory 1976, 22, 592–593.
  16. Körner, J. Coding of an information source having ambiguous alphabet and the entropy of graphs. In Proceedings of the 6th Prague Conference on Information Theory, Prague, Czech Republic, 19–25 September 1973; pp. 411–425.
  17. Tuncel, E.; Nayak, J.; Koulgi, P.; Rose, K. On complementary graph entropy. IEEE Trans. Inf. Theory 2009, 55, 2537–2546.
  18. Csiszár, I.; Körner, J.; Lovász, L.; Marton, K.; Simonyi, G. Entropy splitting for antiblocking corners and perfect graphs. Combinatorica 1990, 10, 27–40.
  19. Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems; Cambridge University Press: Cambridge, UK, 2011.
Figure 1. Zero-error coding for computing with side information at the encoder.
Figure 2. An example of P_{X,Y} and g that satisfies (13), along with the outcomes f(X, Y). The elements outside supp P_{X,Y} are denoted by *.
Figure 3. An illustration of the rate pairs associated with all partitions of Y. The Pareto front is the broken line corresponding to the partitions p_1, p_2, p_3, p_4, p_5; with p_1 = {{1, 2, 3, 4}}, p_2 = {{1, 2, 4}, {3}}, p_3 = {{1, 2}, {3, 4}}, p_4 = {{1, 2}, {3}, {4}}, p_5 = {{1}, {2}, {3}, {4}}.
Figure 4. An illustration of the trajectory of the coarsening greedy algorithm (blue), with the Pareto front of the achievable rates (dashed red).
Figure 5. An illustration of the trajectory of the refining greedy algorithm (green), with the Pareto front of the achievable rates (dashed red).

