Article

Multi-Server Multi-Function Distributed Computation

Communication Systems Department, EURECOM, Sophia Antipolis, 06140 Biot, France
* Author to whom correspondence should be addressed.
This work was conducted when B. Serbetci was a Postdoctoral Researcher at EURECOM.
Entropy 2024, 26(6), 448; https://doi.org/10.3390/e26060448
Submission received: 31 March 2024 / Revised: 14 May 2024 / Accepted: 23 May 2024 / Published: 26 May 2024

Abstract

The work here studies the communication cost for a multi-server multi-task distributed computation framework, as well as for a broad class of functions and data statistics. Considering the framework where a user seeks the computation of multiple complex (conceivably non-linear) tasks from a set of distributed servers, we establish the communication cost upper bounds for a variety of data statistics, function classes, and data placements across the servers. To do so, we proceed to apply, for the first time here, Körner’s characteristic graph approach—which is known to capture the structural properties of data and functions—to the promising framework of multi-server multi-task distributed computing. Going beyond the general expressions, and in order to offer clearer insight, we also consider the well-known scenario of cyclic dataset placement and linearly separable functions over the binary field, in which case, our approach exhibits considerable gains over the state of the art. Similar gains are identified for the case of multi-linear functions.

1. Introduction

Distributed computing plays an increasingly significant role in accelerating the execution of computationally challenging and complex tasks. This growth in influence is rooted in the innate capability of distributed computing to parallelize computational loads across multiple servers. This same parallelization renders distributed computing an indispensable tool for addressing a wide array of complex computational challenges, spanning scientific simulations and the extraction of various spatial data distributions [1], data-intensive analyses for cloud computing [2], and machine learning [3], as well as applications in various other fields such as computational fluid dynamics [4], high-quality graphics for movie and game rendering [5], and a variety of medical applications [6], to name just a few. At the center of this ever-increasing presence of parallelized computing stand modern parallel processing techniques, such as MapReduce [7,8,9] and Spark [10,11].
However, for distributed computing to achieve the desirable parallelization effect, there is an undeniable need for massive information exchange to and from the various network nodes. Reducing this communication load is essential for scalability [12,13,14,15] in various topologies [16,17,18]. Central to the effort to reduce communication costs stand coding techniques such as those found in [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36], including gradient coding [21] and different variants of coded distributed computing that nicely yield gains in reliability, scalability, computation speed, and cost-effectiveness [24]. Similar communication-load aspects are often addressed via polynomial codes [37], which can mitigate stragglers and enhance the recovery threshold, while MatDot codes, devised in [31,38] for secure distributed matrix multiplication, can decrease the number of transmissions for distributed matrix multiplication. This same emphasis on reducing communication costs is even more prominent in works like [31,34,35,38,39,40,41,42,43,44,45,46], which again, focus on distributed matrix multiplication. For example, focusing on a cyclic dataset placement model, the work in [39] provided useful achievability results, while the authors of [35] have characterized achievability and converse bounds for secure distributed matrix multiplication. Furthermore, the work in [34] found creative methods to exploit the correlation between the entries of the matrix product in order to reduce the cost of communication.

1.1. The Multi-Server Multi-Function Distributed Computing Setting and the Need for Accounting for General Non-Linear Functions

As computing requirements become increasingly challenging, distributed computing models have also evolved to be increasingly complex. One such recent model is the multi-server multi-function distributed computing model that consists of a master node, a set of distributed servers, and a user demanding the computation of multiple functions. The master contains the set of all datasets and allocates them to the servers, which are then responsible for computing a set of specific subfunctions for the datasets. This multi-server multi-function setting was recently studied by Wan et al. in [39] for the class of linearly separable functions, which nicely captures a wide range of real-world tasks [7] such as convolution [41], the discrete Fourier transform [47], and a variety of other cases as well. This same work bounded the communication cost, employing linear encoding and linear decoding that leverage the structure of requests.
At the same time, however, there is a growing need to consider more general classes of functions, including non-linear functions, as is often the case with subfunctions that produce intermediate values in MapReduce operations [7] or that relate to quantization [48], classification [49], and optimization [50]. Intense interest can also be identified in the aforementioned problem of distributed matrix multiplication, which has been explored in a plethora of works, including [35,42,45,51,52,53], with a diverse focus that entails secrecy [45,51,53], as well as precision and stragglers [14,35,42,52], to name a few. In addition to matrix multiplication, other important non-linear function classes include sparse polynomial multiplication [54], permutation invariant functions [55], which often appear in multi-agent settings and have applications in learning, combinatorics, and graph neural networks, as well as nomographic functions [56,57], which can appear in the context of sensor networks and which have strong connections with interference exploitation and lattice codes, as nicely revealed in [56,57].
Our own work here is indeed motivated by this emerging need for distributed computing of non-linear functions, and our goal is to now consider general functions in the context of the multi-server multi-function distributed computing framework while also capturing dataset statistics and correlations and while exploiting the structural properties of the (possibly non-linear) functions requested by the user. For this purpose, we go beyond the linear coding approaches in [39,58,59] and devise demand-based encoding–decoding solutions. Furthermore, we adopt—in the context of the multi-server multi-function framework—the powerful tools from characteristic graphs that are specifically geared toward capturing both the statistical structure of the data as well as the properties of functions beyond the linear case. To help the reader better understand our motivation and contribution, we proceed with a brief discussion on data structure and characteristic graphs.

1.2. Data Correlation and Structure

Crucial in reducing the communication bottleneck of distributed computing is an ability to capture the structure that appears in modern datasets. Indeed, even before computing considerations come into play, capturing the general structure of the data has been crucial in reducing the communication load in various scenarios such as those in the seminal work by Slepian–Wolf [60] and Cover [61]. Similarly, when function computation is introduced, data structure can be a key component. In the context of computing, we have seen the seminal work by Körner and Marton [62], which focused on efficient compression of the modulo 2 sum of two statistically dependent sources, while Lalitha et al. [63] explored linear combinations of multiple statistically dependent sources. Furthermore, for general bivariate functions of correlated sources, when one of the sources is available as side information, the work of Yamamoto [64] generalized the pioneering work of Wyner and Ziv [65] to provide a rate-distortion characterization for the function computation setting.
It is the case, however, that when the computational model becomes more involved—as is the case in our multi-server multi-function scenario here—the data may often be treated as unstructured and independent [39,58,66,67,68]. This naturally allows for crucial analytical tractability, but it may often ignore the potential benefits of accounting for statistical skews and correlations in data when aiming to reduce communication costs in distributed computing. Furthermore, this comes at a time when more and more function computation settings—such as in medical imaging analysis [69], data fusion, and group inferences [70], as well as predictive modeling for artificial intelligence [71]—entail datasets with prominent dependencies and correlations. While various works, such as those by Körner–Marton [62], Han–Kobayashi [72], Yamamoto [64], Alon–Orlitsky [73], and Orlitsky–Roche [74], provide crucial breakthroughs in exploiting data structure, to the best of our knowledge, in the context of fully distributed function computation, the structure in functions and data has yet to be considered simultaneously.

1.3. Characteristic Graphs

To jointly account for this structure in both data and functions, we draw from the powerful literature on characteristic graphs, introduced by Körner for source coding [75] and used in data compression [62,73,74,76,77,78], cryptography [79], image processing [80], and bioinformatics [81]. For example, toward understanding the fundamental limits of distributed functional compression, the work in [75] devised the graph entropy approach in order to provide the best possible encoding rate of an information source with vanishing error probability. This approach, while capturing both function structure and source structure, was presented for the case of one source, and it is not directly applicable to the distributed computing setting. Similarly, the zero-error side information setting in [73] and the lossy encoding setting in [64,74] use Körner's graph entropy [75] to capture both function structure and source structure, but they were again presented for the case of one source. A similar focus can be found in the works in [73,74,76,77,79]. The same characteristic graph approach was nicely used by Feizi and Médard in [82] for a simple distributed computing framework, albeit in the absence of considerations for the data structure.
Characteristic graphs, which are used in fully distributed architectures to compress information, can allow us to capture various data statistics and correlations, various data placement arrangements, and various function types. This versatility motivates us to employ characteristic graphs in our multi-server multi-function architecture for distributed computing of non-linear functions.

1.4. Contributions

In this paper, leveraging fundamental principles from source and functional compression, as well as graph theory, we study a general multi-server multi-function distributed computing framework composed of a single user requesting a set of functions, which are computed with the assistance of distributed servers that have partial access to the datasets. To achieve our goal, we consider the use of Körner’s characteristic graph framework [75] in our multi-server multi-function setting and proceed to establish upper bounds on the achievable sum-rates reflecting the setting’s communication requirements.
By extending, for the first time here, Körner’s characteristic graph framework [75] to the new multi-server multi-function setting, we are able to reflect the nature of the functions and data statistics in order to allow each server to build a codebook of encoding functions that determine the transmitted information. Each server, using its own codebook, can transmit a function (or a set of functions) of the subfunctions of the data available in its storage and to then provide the user with sufficient information for evaluating the demanded functions. The codebooks allow for a substantial reduction in the communication load.
The employed approach allows us to account for general dataset statistics, correlations, dataset placement, and function classes, thus yielding gains over the state of the art [39,60], as showcased in our examples for the case of linearly separable functions in the presence of statistically skewed data, as well as for the case of multi-linear functions where the gains are particularly prominent, again under statistically skewed data. For this last case of multi-linear functions, we provide an upper bound on the achievable sum-rate (see Section 4.2) under a cyclic placement of the data that reside in the binary field. We also provide a generalization of some elements in the existing works on linearly separable functions [39,58].
In the end, our work demonstrates the power of using characteristic-graph-based encoding for exploiting the structural properties of functions and data in distributed computing, as well as provides insights into fundamental compression limits, all for the broad scenario of multi-server multi-function distributed computation.

1.5. Paper Organization

The rest of this paper is structured as follows. Section 2 describes the system model for the multi-server multi-function architecture. Section 3 details the main results on the communication cost or sum-rate bounds under general dataset distributions and correlations, dataset placement models, and general function classes requested by the user over a field of characteristic $q \geq 2$, obtained by employing the characteristic graph approach, and contrasts the sum-rate with the relevant prior works, e.g., [39,60]. Section 4 presents numerical evaluations that demonstrate the achievable gains. Finally, we summarize our key results and outline possible future directions in Section 5. We provide a primer for the key definitions and results on characteristic graphs and their fundamental compression limits in Appendix A and give proofs of our main results in Appendix B.
Notation: We denote by $H(X) = -\mathbb{E}[\log P_X(X)]$ the Shannon entropy of the random variable $X$ drawn from the distribution or probability mass function (PMF) $P_X$. Let $P_{X_1, X_2}$ be the joint PMF of two random variables $X_1$ and $X_2$, where $X_1$ and $X_2$ are not necessarily independent and identically distributed (i.i.d.), i.e., equivalently, the joint PMF is not in product form. The notation $X \sim \mathrm{Bern}(\epsilon)$ denotes that $X$ is Bernoulli distributed with parameter $\epsilon \in [0, 1]$. Let $h(\cdot)$ denote the binary entropy function and $H_B(B(n, \epsilon))$ denote the entropy of a binomial random variable of size $n \in \mathbb{N}$, with $\epsilon \in [0, 1]$ modeling the success probability of each Boolean-valued outcome. The notation $X_S = \{X_i : i \in S\}$ denotes a subset of servers with indices $i \in S$ for $S \subseteq \Omega$. The notation $S^c = \Omega \setminus S$ denotes the complement of $S$. We denote the probability of an event $A$ by $P(A)$. The notation $\mathbb{1}_{x \in A}$ denotes the indicator function, which takes the value 1 if $x \in A$ and 0 otherwise. The notation $G_{X_i}$ denotes the characteristic graph that server $i \in \Omega$ builds for computing $F(X_\Omega)$. The measures $H_{G_X}(X)$ and $H_{G_X}(X \,|\, Y)$ denote the entropy of the characteristic graph $G_X$ and the conditional graph entropy for random variable $X$ given $Y$, respectively. The notation $T(N, K, K_c, M, N_r)$ denotes the topology of the distributed system. We note that $Z_i$ denotes the indices of the datasets stored in server $i \in \Omega$, and the notation $K_n(S) = |Z_S| = |\cup_{i \in S} Z_i|$ represents the cardinality of the datasets in the union of the sets in $S$ for a given subset $S \subseteq \Omega$ of servers. We also note that $[N] = \{1, 2, \dots, N\}$, $N \in \mathbb{Z}^+$, and $[a : b] = \{a, a+1, \dots, b\}$ for $a, b \in \mathbb{Z}^+$ such that $a < b$. We use the convention $\mathrm{mod}\{b, a\} = a$ if $a$ divides $b$. We provide the notation in Table 1.

2. System Model

This section outlines our multi-server multi-function architecture and details our main technical contributions, namely, the communication cost for the problem of distributed computing of general non-linear functions and the cost for special instances of the computation problem under some simplifying assumptions on the dataset statistics, dataset correlations, placement, and the structures of functions.
In the multi-server multi-function distributed computation framework, the master has access to the set of all datasets and distributes the datasets across the servers. The total number of servers is N, and each server has a capacity of M. Communication from the master to the servers is allowed, whereas the servers are distributed and cannot collaborate. The user requests K c functions that could be non-linear. Given the dataset assignment to the servers, any subset of N r servers is sufficient to compute the functions requested. We denote by T ( N , K , K c , M , N r ) the topology for the described multi-server multi-function distributed computing setting, which we detail in the following.

2.1. Datasets, Subfunctions, and Placement into Distributed Servers

There are $K$ datasets in total, each denoted by $D_k$, $k \in [K]$. Each distributed server $i \in \Omega = [N]$ with a capacity of $M$ is assigned a subset of datasets with indices $Z_i \subseteq [K]$ such that $|Z_i| = M$, where the assignments possibly overlap.
Each server computes a set of subfunctions $W_k = h_k(D_k)$ for $k \in Z_i \subseteq [K]$, $i \in \Omega$. The datasets $\{D_k\}_{k \in [K]}$ could be dependent (We note that by exploiting the temporal and spatial variation or dependence of data, it is possible to decrease the communication cost.) across $[K]$, and so could $\{W_k\}_{k \in [K]}$. We denote the number of symbols in each $W_k$ by $L$, which equals the blocklength $n$. Let $X_i = \{W_k\}_{k \in Z_i} = W_{Z_i} = \{h_k(D_k)\}_{k \in Z_i}$ denote the set of subfunctions of the $i$-th server, $\mathcal{X}_i$ be the alphabet of $X_i$, and $X_\Omega = (X_1, X_2, \dots, X_N)$ be the set of subfunctions of all servers. We denote by $W_k = (W_k^1, W_k^2, \dots, W_k^n)$ and $X_i = (X_i^1, X_i^2, \dots, X_i^n) \in \mathbb{F}_q^{|Z_i| \times n}$ the length-$n$ sequences of the subfunction $W_k$ and of $W_{Z_i}$ assigned to server $i \in \Omega$.

2.2. Cyclic Dataset Placement Model, Computation Capacity, and Recovery Threshold

We assume that the total number of datasets $K$ is divisible by the number of servers $N$, i.e., $\frac{K}{N} \triangleq \Delta \in \mathbb{Z}^+$. The dataset placement on the $N$ distributed servers is conducted in a circular or cyclic manner with $\Delta$ circular shifts between two consecutive servers, where the shifts are to the right and the final entries are moved to the first positions, if necessary. As a result of the cyclic placement, any subset of $N_r$ servers covers the set of all datasets needed to compute the functions requested by the user. Given $N_r \in [N]$, each server has a storage size or computation cost of $|Z_i| = M = \Delta(N - N_r + 1)$, and the amount of dataset overlap between consecutive servers is $\Delta(N - N_r)$.
Hence, the set of indices assigned to server $i \in \Omega$ is given as follows:
$$Z_i = \bigcup_{r=0}^{\Delta - 1} \big\{ \mathrm{mod}\{i, N\} + rN,\; \mathrm{mod}\{i+1, N\} + rN,\; \dots,\; \mathrm{mod}\{i + N - N_r, N\} + rN \big\}, \qquad (1)$$
where $X_i = W_{Z_i}$, $i \in \Omega$. As a result of (1), the cardinality of the datasets assigned to each server meets the storage capacity constraint $M$ with equality, i.e., $|Z_i| = M$ for all $i \in \Omega$.
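To make the placement rule in (1) concrete, the following short Python sketch (ours, not part of the scheme in the paper) enumerates the index sets $Z_i$ for a given topology; the small example at the bottom uses hypothetical parameters and reproduces the cyclic assignment used later in Section 4.1, Scenario II.

```python
def cyclic_placement(N, K, N_r):
    """Return the index sets Z_i of (1) for servers i = 1, ..., N (a sketch of the rule)."""
    assert K % N == 0, "K must be divisible by N"
    delta = K // N
    placements = {}
    for i in range(1, N + 1):
        Z_i = set()
        for r in range(delta):
            for s in range(N - N_r + 1):
                idx = (i + s) % N
                idx = N if idx == 0 else idx   # the paper's convention mod{b, a} = a when a divides b
                Z_i.add(idx + r * N)
        placements[i] = sorted(Z_i)
    return placements

# Hypothetical example: N = 3 servers, K = 3 datasets, recovery threshold N_r = 2,
# which gives Z_1 = [1, 2], Z_2 = [2, 3], Z_3 = [1, 3], i.e., |Z_i| = M = 2 for all servers.
print(cyclic_placement(N=3, K=3, N_r=2))
```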

2.3. User Demands and Structure of the Computation

We address the problem of distributed lossless compression of a set of general multi-variable functions $F_j(X_\Omega): \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_N \to \mathbb{F}_q$, $j \in [K_c]$, requested by the user from the set of servers, where $K_c \geq 1$, and the functions are known by the servers and the user. More specifically, the user aims to compute, from a subset of distributed servers and in a lossless manner, the following length-$n$ sequence as $n$ tends to infinity:
$$F_j(X_\Omega) = \{F_j(X_1^l, X_2^l, \dots, X_N^l)\}_{l=1}^{n}, \quad j \in [K_c], \qquad (2)$$
where $F_j(X_1^l, X_2^l, \dots, X_N^l)$ is the function outcome for the $l$-th realization, $l \in [n]$, given the length-$n$ sequence. We note that the representation in (2) is the most general form of a (conceivably non-linear) multi-variate function, which encompasses the special cases of separable functions and linearly separable functions, which we discuss next.
In this work, the user seeks to compute functions that are separable into subfunctions of the individual datasets. Each demanded function $f_j(\cdot)$, $j \in [K_c]$, is a function of the subfunctions $\{W_k\}_{k \in [K]}$ such that $W_k = h_k(D_k) \in \mathbb{F}_q$, where $h_k$ is a general function (linear or non-linear) of dataset $D_k$. Hence, using the relation $X_i = W_{Z_i} = \{h_k(D_k)\}_{k \in Z_i}$, each demanded function $j \in [K_c]$ can be written in the following form:
$$f_j(W_{[K]}) = f_j\big(h_1(D_1), \dots, h_K(D_K)\big) = F_j\big(\{h_k(D_k)\}_{k \in Z_1}, \dots, \{h_k(D_k)\}_{k \in Z_N}\big) = F_j(X_\Omega). \qquad (3)$$
In the special case of linearly separable functions (Special instances of the linearly separable representation of the subfunctions $\{W_k\}_k$ given in (4) are linear functions of the datasets $\{D_k\}$ and are denoted by $F_j = \sum_k \gamma_{jk} D_k$.) [39], the demanded functions take the form:
$$\{F_j(X_\Omega)\}_{j \in [K_c]} = [F_1,\; F_2,\; \dots,\; F_{K_c}]^{\mathsf{T}} = \Gamma\, W, \qquad (4)$$
where $W = [W_1,\; W_2,\; \dots,\; W_K]^{\mathsf{T}} \in \mathbb{F}_q^{K \times 1}$ is the subfunction vector, and the coefficient matrix $\Gamma = \{\gamma_{jk}\} \in \mathbb{F}_q^{K_c \times K}$ is known to the master node, the servers, and the user. In other words, $\{F_j(X_\Omega)\}_{j \in [K_c]}$ is a set of linear maps of the subfunctions $\{W_k\}_k$, where $F_j(X_\Omega) = \sum_{k \in [K]} \gamma_{jk} \cdot W_k$. We do not restrict $\{F_j(X_\Omega)\}_{j \in [K_c]}$ to linearly separable functions, i.e., it may hold that $\{F_j(X_\Omega)\}_{j \in [K_c]} \neq \Gamma\, W$.

2.4. Communication Cost for the Characteristic-Graph-Based Computing Approach

To compute $\{F_j(X_\Omega)\}_{j \in [K_c]}$, each server $i \in \Omega$ constructs a characteristic graph, denoted by $G_{X_i}$, for compressing $X_i$. More specifically, for asymptotically lossless computation of the demanded functions, the server builds the $n$-th OR power $G_{X_i}^n$ of $G_{X_i}$ for compressing $X_i$ to determine the transmitted information. The minimum code rate achievable for distinguishing the edges of $G_{X_i}^n$ as $n \to \infty$ is given by the characteristic graph entropy, $H_{G_{X_i}}(X_i)$. For a primer on key graph-theoretic concepts, characteristic-graph-related definitions, and the fundamental compression limits of characteristic graphs, we refer the reader to [76,79,82]. In this work, we solely focus on the characterization of the total communication cost from all servers to the user, i.e., the achievable sum-rate, without accounting for the costs of communication between the master and the servers and of the computations performed at the servers and the user.
Each server $i \in \Omega$ builds a mapping from $X_i$ to a valid coloring of $G_{X_i}^n$, denoted by $c_{G_{X_i}^n}(X_i)$. The coloring $c_{G_{X_i}^n}(X_i)$ specifies the color classes of $X_i$ that form independent sets distinguishing the demanded function outcomes. Given an encoding function $g_i$ that models the transmission of server $i \in \Omega$ for computing $\{F_j(X_\Omega)\}_{j \in [K_c]}$, we denote by $Z_i = g_i(X_i) = e_{X_i}(c_{G_{X_i}^n}(X_i))$ the color encoding performed by server $i \in \Omega$ for $X_i$. Hence, the communication rate of server $i \in \Omega$, for a sufficiently large blocklength $n$, where $T_i$ is the length of the color encoding performed at server $i \in \Omega$, is
$$R_i = \frac{T_i}{L} = \frac{H\big(e_{X_i}(c_{G_{X_i}^n}(X_i))\big)}{n} \geq H_{G_{X_i}}(X_i), \quad i \in \Omega, \qquad (5)$$
where the inequality follows from exploiting the achievability of $H_{G_{X_i}}(X_i) = \lim_{n \to \infty} \frac{1}{n} H^{\chi}_{G_{X_i}^n}(X_i)$, where $H^{\chi}_{G_{X_i}^n}(X_i)$ is the chromatic entropy of the graph $G_{X_i}^n$ [73,75]. We refer the reader to Appendix A.2 for a detailed description of the notions of chromatic and graph entropies (cf. (A9) and (A10), respectively).
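To give a feel for the coloring-based encoding behind (5), the following self-contained Python sketch (a toy single-letter example of ours, with hypothetical parameters) builds the characteristic graph of a server holding $X_1 = (W_1, W_2)$ when the only demanded function is $W_2 \oplus W_3$ and a second server holds $W_3$; the entropy of any valid coloring of $G_{X_1}$ upper-bounds the graph entropy, and hence is an achievable rate for server 1.

```python
from itertools import product
from math import log2

def h_entropy(pmf):
    """Shannon entropy (in bits) of a probability mass function given as an iterable."""
    return -sum(p * log2(p) for p in pmf if p > 0)

def build_edges(V1, V2, F):
    """(v, w) is an edge of the characteristic graph if some x2 makes the demanded function differ."""
    return {(v, w) for v in V1 for w in V1
            if v < w and any(F(v, x2) != F(w, x2) for x2 in V2)}

def greedy_coloring(V, E):
    """A valid (not necessarily optimal) coloring of the graph (V, E)."""
    color = {}
    for v in V:
        used = {color[w] for (a, b) in E if v in (a, b)
                for w in (a, b) if w != v and w in color}
        color[v] = next(c for c in range(len(V)) if c not in used)
    return color

# Server 1 holds X_1 = (W_1, W_2); server 2 holds X_2 = W_3; the demanded function is W_2 xor W_3.
V1 = list(product([0, 1], repeat=2))
V2 = [0, 1]
F = lambda x1, x3: (x1[1] + x3) % 2
coloring = greedy_coloring(V1, build_edges(V1, V2, F))

eps = 0.1                      # hypothetical skew: W_k i.i.d. ~ Bern(eps)
p_vertex = {v: (eps if v[0] else 1 - eps) * (eps if v[1] else 1 - eps) for v in V1}
p_color = {}
for v, c in coloring.items():
    p_color[c] = p_color.get(c, 0.0) + p_vertex[v]

# Coloring-based rate for X_1, cf. (5): about 0.47 bits, versus H(X_1) = 2 h(eps) of about 0.94 bits.
print(h_entropy(p_color.values()), h_entropy(p_vertex.values()))
```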
For the multi-server multi-function distributed setup, using the characteristic-graph-based fundamental limit in (5), an achievable sum-rate for asymptotic lossless computation is
$$R_{\mathrm{ach}} = \sum_{i \in \Omega} R_i \geq \sum_{i \in \Omega} H_{G_{X_i}}(X_i). \qquad (6)$$
We next provide our main results in Section 3.

3. Main Results

In this section, we analyze the multi-server multi-function distributed computing framework exploiting the characteristic-graph-based approach in [75]. In contrast to the previous research attempts in this direction, our solution method is general, and it captures (i) general input statistics or dataset distributions or the skew in data instead of assuming uniform distributions, (ii) correlations across datasets, (iii) any dataset placement model across servers beyond the cyclic [39] or the Maddah–Ali and Niesen [83] placements, and (iv) general function classes requested by the user, instead of focusing on a particular function type (see, e.g., [39,67,84]).
Subsequently, we delve into specific function computation scenarios. First, we present our main result (Theorem 1), which is the most general form that captures (i)–(iv). We then demonstrate (in Proposition 1) that the celebrated result of Wan et al. [Theorem 2] [39] can be obtained as a special case of Theorem 1, given that (i) the datasets are i.i.d. and uniform over $q$-ary fields, (ii) the placement of datasets across servers is cyclic, and (iii) the demanded functions are linearly separable, given as in (4). Under a correlated and identically distributed Bernoulli dataset model with a skewness parameter $\epsilon \in (0, 1)$ for the datasets, we next present in Proposition 2 the achievable sum-rate for computing Boolean functions. Finally, in Proposition 3, we analyze our characteristic-graph-based approach for evaluating multi-linear functions, a pertinent class of non-linear functions, under the assumption of cyclic placement and i.i.d. Bernoulli-distributed datasets with parameter $\epsilon$, and derive an upper bound on the sum-rate needed. To gain insight into our analytical results and demonstrate the savings in the total communication cost, we provide some numerical examples.
We next present our main theorem (Theorem 1), on the achievable communication cost for the multi-server multi-function topology, which holds for all input statistics under any correlation model across datasets and for the distributed computing of all function classes requested by the user, regardless of the data assignment over the servers’ caches. The key to capturing the structure of general functions in Theorem 1 is the utilization of a characteristic-graph-based compression technique, as proposed by Körner in [75] (For a more detailed description of characteristic graphs and their entropies, see Appendix A.2.).
Theorem 1 
(Achievable sum-rate using the characteristic graph approach for general functions and distributions). In the multi-server multi-function distributed computation model, denoted by $T(N, K, K_c, M, N_r)$, under general placement of datasets, for a set of $K_c$ general functions $\{f_j(W_{[K]})\}_{j \in [K_c]}$ requested by the user, and under general jointly distributed dataset models, including non-uniform inputs and allowing correlations across datasets, the characteristic-graph-based compression yields the following upper bound on the achievable communication rate:
$$R_{\mathrm{ach}} \leq \sum_{i=1}^{N_r} \min_{Z_i = g_i(X_i):\, g_i \in \mathcal{C}_i} H_{G_{X_i}}(X_i), \qquad (7)$$
where
  • $G_{X_i} = \bigcup_{j \in [K_c]} G_{X_i, j}$ is the union characteristic graph (We refer the reader to (A12) (Appendix A.2) for the definition of a union of characteristic graphs.) that server $i \in \Omega$ builds for computing $\{f_j(W_{[K]})\}_{j \in [K_c]}$,
  • $\mathcal{C}_i \ni g_i$ denotes a codebook of functions that server $i \in \Omega$ uses for computing $\{f_j(W_{[K]})\}_{j \in [K_c]}$,
  • each subfunction $W_k$, $k \in [K]$, is defined over a $q$-ary field such that the characteristic is at least 2, and
  • $Z_i = g_i(X_i)$ with $g_i \in \mathcal{C}_i$ denotes the transmitted information from server $i \in \Omega$.
Proof. 
See Appendix B.1. □
Theorem 1 provides a general upper bound on the sum-rate for computing functions under general dataset statistics, correlations, and placement models, and it allows any function type over a field of characteristic $q \geq 2$. We note that in (7), the codebook $\mathcal{C}_i$ determines the structure of the union characteristic graph $G_{X_i}$, which, in turn, determines the distribution of $Z_i$. Therefore, the tightness of the rate upper bound relies essentially on the codebook selection. We also note that it is possible to analyze the computational complexity of building a characteristic graph and computing the bound in (7) by evaluating the complexity of the transmissions $Z_i$ determined by $\{f_j(W_{[K]})\}_{j \in [K_c]}$ for a given $i \in \Omega$. However, the current manuscript focuses primarily on the cost of communication, and we leave the computational complexity analysis to future work. Because (7) is not analytically tractable in general, in the following, we focus on special instances of Theorem 1 to gain insights into the effects of input statistics, dataset correlations, and special function classes in determining the total communication cost.
We next demonstrate that the achievable communication cost for the special scenario of the distributed linearly separable computation framework given in [Theorem 2] [39] is subsumed by the characterization provided in Theorem 1. We showcase the achievable sum-rate result for linearly separable functions next.
Proposition 1 
(Achievable sum-rate using the characteristic graph approach for linearly separable functions and i.i.d. subfunctions over $\mathbb{F}_q$). In the multi-server multi-function distributed computation model, denoted by $T(N, K, K_c, M, N_r)$, under the cyclic placement of datasets, where $\frac{K}{N} = \Delta \in \mathbb{Z}^+$, for a set of $K_c$ linearly separable functions, given as in (4), requested by the user, and given i.i.d. uniformly distributed subfunctions over a field of characteristic $q \geq 2$, the characteristic-graph-based compression yields the following bound on the achievable communication rate:
$$R_{\mathrm{ach}} \leq \begin{cases} \min\{K_c, \Delta\}\, N_r, & 1 \leq K_c \leq \Delta N_r, \\ \min\{K_c, K\}, & \Delta N_r < K_c. \end{cases} \qquad (8)$$
Proof. 
See Appendix B.2. □
We note that Theorem 1 reduces to Proposition 1 when three conditions hold: (i) the dataset placement across servers is cyclic, following the rule in (1), (ii) the subfunctions $W_{[K]}$ are i.i.d. and uniform over $\mathbb{F}_q$ (see (A21) in Appendix B.2), and (iii) the codebook $\mathcal{C}_i$ is restricted to linear combinations of the subfunctions $W_{[K]}$, which yields that the independent sets of $G_{X_i}$ satisfy a set of linear constraints (We detail these linear constraints in Appendix B.2, where the set of linear equations given in (A22) is used to simplify the entropy $H_{G_{X_i}}(X_i)$ of the union characteristic graph $G_{X_i}$ via the expression given in (A20) for evaluating the upper bound given in (A18) on the achievable sum-rate for computing the desired functions, exploiting the entropies of the union characteristic graphs for each of the $N_r$ servers, given the recovery threshold $N_r$.) in the variables $\{W_k\}_{k \in Z_i}$. Note that the linear encoding and decoding approach for computing linearly separable functions, proposed by Wan et al. in [Theorem 2] [39], is valid over a field of characteristic $q > 3$. However, in Proposition 1, the characteristic of $\mathbb{F}_q$ is at least 2, i.e., $q \geq 2$, generalizing [Theorem 2] [39] to a broader set of input alphabets.
Next, we aim to demonstrate the merits of characteristic-graph-based compression in capturing dataset correlations within the multi-server multi-function distributed computation framework. More specifically, we restrict the general input statistics in Theorem 1 such that the datasets are correlated and identically distributed, where each subfunction follows a Bernoulli distribution with the same parameter $\epsilon$, i.e., $W_k \sim \mathrm{Bern}(\epsilon)$ with $\epsilon \in (0, 1)$, and the user demands $K_c$ arbitrary Boolean functions, regardless of the data assignment. Similarly to Theorem 1, the following proposition (Proposition 2) holds for general (Boolean) function types regardless of the data assignment.
Proposition 2 
(Achievable sum-rate using the characteristic graph approach for general functions and identically distributed subfunctions over $\mathbb{F}_2$). In the multi-server multi-function distributed computing setting, denoted by $T(N, K, K_c, M, N_r)$, under the general placement of datasets, for a set of $K_c$ Boolean functions $\{f_j(W_{[K]})\}_{j \in [K_c]}$ requested by the user, and given identically distributed and correlated subfunctions with $W_k \sim \mathrm{Bern}(\epsilon)$, $k \in [K]$, where $\epsilon \in (0, 1)$, the characteristic-graph-based compression yields the following bound on the achievable communication rate:
$$R_{\mathrm{ach}} \leq \sum_{i=1}^{N_r} \min_{Z_i = g_i(X_i):\, g_i \in \mathcal{C}_i} h(Z_i), \qquad (9)$$
where
  • $\mathcal{C}_i \ni g_i: \{0, 1\}^M \to \{0, 1\}$ denotes a codebook of Boolean functions that server $i \in \Omega$ uses,
  • $Z_i = g_i(X_i)$ with $g_i \in \mathcal{C}_i$ denotes the transmitted information from server $i \in \Omega$,
  • $G_{X_i}$ has two maximal independent sets (MISs), namely, $s_0(G_{X_i})$ and $s_1(G_{X_i})$, yielding $Z_i = 0$ and $Z_i = 1$, respectively, and
  • the probability that $W_{Z_i}$ yields the function value $Z_i = 1$ is given as
$$P(Z_i = 1) = P\big(W_{Z_i} \in s_1(G_{X_i})\big), \quad i \in \Omega. \qquad (10)$$
Proof. 
See Appendix B.3. □
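As a simple illustration of one term of (9), the following sketch (our own toy example with hypothetical parameters, not a construction from the paper) evaluates $h(Z_i)$ for a server holding $M = 2$ identically distributed, correlated bits under a mixture correlation model (with probability $\rho$ the two bits coincide, otherwise they are independent), when the codebook entry $g_i$ is the OR of the two bits, so that $s_1(G_{X_i}) = \{(0,1), (1,0), (1,1)\}$.

```python
from math import log2

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * log2(p) - (1 - p) * log2(1 - p)

def rate_term(eps, rho):
    # P(W_1 = W_2 = 0): with probability rho the bits coincide, otherwise they are independent
    p00 = rho * (1 - eps) + (1 - rho) * (1 - eps) ** 2
    p_Z1 = 1.0 - p00          # P(Z_i = 1) = P(W_{Z_i} in s_1(G_{X_i})), cf. (10), for g_i = OR
    return h(p_Z1)

# For eps = 0.1, increasing rho lowers P(Z_i = 1) and hence the rate term h(Z_i) in (9).
print(rate_term(eps=0.1, rho=0.0), rate_term(eps=0.1, rho=0.8))
```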
While, admittedly, the above approach (Proposition 2) may not directly offer sufficient insight, it does employ the new machinery to offer a generality that allows us to plug in any set of parameters to determine the achievable performance.
Contrasting Propositions 1 and 2, which give the total communication costs for computing linearly separable and Boolean functions over $\mathbb{F}_2$, respectively, Proposition 2, by exploiting the skew and correlations of the datasets indexed by $Z_i$, as well as the functions' structures via the MISs $s_0(G_{X_i})$ and $s_1(G_{X_i})$ of server $i \in \Omega$, demonstrates that harnessing the correlation across the datasets can indeed reduce the total communication cost compared with the setting in Proposition 1, which was devised under the assumption of i.i.d. and uniformly distributed subfunctions.
The prior works have focused on devising distributed computation frameworks and exploring their communication costs for specific function classes. For instance, in [62], Körner and Marton restricted the computation to the binary sum function, and in [72], Han and Kobayashi classified functions into two categories depending on whether they can be computed at a sum-rate lower than that of [60]. Furthermore, the computation problem has been studied for specific topologies, e.g., the side information setting in [73,74]. Despite the existing efforts, see, e.g., [62,72,73,74], to the best of our knowledge, for the given multi-server multi-function distributed computing scenario, there is still no general framework for determining the fundamental limits of the total communication cost for computing general non-linear functions. Indeed, for this setting, the most pertinent existing work that applies to general non-linear functions and provides an upper bound on the achievable sum-rate is that of Slepian–Wolf [60]. On the other hand, the achievable scheme presented in Theorem 1 can provide savings in communication cost over [60] for linearly separable functions and beyond. To that end, we exploit Theorem 1 to determine an upper bound on the achievable sum-rate for the distributed computing of a multi-linear function of the form
$$f(W_{[K]}) = \prod_{k \in [K]} W_k. \qquad (11)$$
Note that (11) is used in various scenarios, including distributed machine learning, e.g., to reduce variance in noisy datasets via ensemble learning [85] or weighted averaging [86], sensor network applications to aggregate readings for improved data analysis [87], as well as distributed optimization and financial modeling, where these functions play pivotal roles in establishing global objectives and managing risk and return [88,89].
Drawing on the utility of characteristic graphs in capturing the structures of data and functions, as well as input statistics and correlations, and the general result in Theorem 1, our next result, Proposition 3, demonstrates a new upper bound on the achievable sum rate for computing multi-linear functions within the framework of multi-server and multi-function distributed computing via exploiting conditional graph entropies.
Proposition 3 
(Achievable sum-rate using the characteristic graph approach for multi-linear functions and i.i.d. subfunctions over $\mathbb{F}_2$). In a multi-server multi-function distributed computing setting, denoted by $T(N, K, K_c, M, N_r)$, under the cyclic placement of datasets, where $\frac{K}{N} = \Delta \in \mathbb{Z}^+$, for computing the multi-linear function ($K_c = 1$) given as in (11), requested by the user, and given i.i.d. subfunctions $W_k \sim \mathrm{Bern}(\epsilon)$, $k \in [K]$, for some $\epsilon \in (0, 1)$, the characteristic-graph-based compression yields the following bound on the achievable communication rate:
$$R_{\mathrm{ach}} \leq \frac{1 - (\epsilon_M)^{N^*}}{1 - \epsilon_M} \cdot h(\epsilon_M) + (\epsilon_M)^{N^*} \cdot \mathbb{1}_{\Delta_N > 0} \cdot h(\epsilon_{\xi_N}), \qquad (12)$$
where
  • $\epsilon_M = \epsilon^M$ denotes the probability that the product of $M$ subfunctions, with $W_k \sim \mathrm{Bern}(\epsilon)$ i.i.d. across $k \in [K]$, takes the value one, i.e., $P\big(\prod_{k \in S:\, |S| = M} W_k = 1\big) = \epsilon^M$,
  • the variable $N^* = \big\lfloor \frac{N}{N - N_r + 1} \big\rfloor$ denotes the minimum number of servers needed to compute $f(W_{[K]})$, given as in (11), where each of these servers computes a disjoint product of $M$ subfunctions, and
  • the variable $\Delta_N = N - N^* \cdot (N - N_r + 1)$ represents whether an additional server is needed to aid the computation; if $\Delta_N \geq 1$, then $\xi_N$ denotes the number of subfunctions to be computed by the additional server, and, similarly to the above, $P\big(\prod_{k \in S:\, |S| = \xi_N} W_k = 1\big) = \epsilon^{\xi_N} = \epsilon_{\xi_N}$.
Proof. 
See Appendix B.4. □
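For quick numerical use, the following Python sketch (ours) transcribes the bound (12) under the cyclic placement of Section 2.2. It follows our reading of Proposition 3, assuming $N^* = \lfloor N/(N - N_r + 1) \rfloor$ and $\xi_N = \Delta \cdot \Delta_N$ (as in Section 4.1), so it should be taken as an illustration of the formula rather than a definitive implementation.

```python
from math import log2

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * log2(p) - (1 - p) * log2(1 - p)

def prop3_sum_rate_bound(N, K, N_r, eps):
    """Evaluate the right-hand side of (12) for i.i.d. W_k ~ Bern(eps) and cyclic placement."""
    delta = K // N
    M = delta * (N - N_r + 1)              # datasets per server
    eps_M = eps ** M                       # P(product of M subfunctions = 1)
    N_star = N // (N - N_r + 1)            # servers computing disjoint products of M subfunctions
    Delta_N = N - N_star * (N - N_r + 1)   # > 0 if an additional server is needed
    xi_N = delta * Delta_N                 # subfunctions left for the additional server
    rate = (1 - eps_M ** N_star) / (1 - eps_M) * h(eps_M)
    if Delta_N > 0:
        rate += eps_M ** N_star * h(eps ** xi_N)
    return rate

# Hypothetical topology: N = 4, K = 8 (Delta = 2), N_r = 3, so M = 4 and N* = 2.
print(prop3_sum_rate_bound(N=4, K=8, N_r=3, eps=0.1))
```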
We next detail two numerical examples (Section 4.1 and Section 4.2) to showcase the achievable gains in the total communication cost for Proposition 2 and Proposition 3, respectively.

4. Numerical Evaluations to Demonstrate the Achievable Gains

Given $T(N, K, K_c, M, N_r)$, to gain insight into our analytical results and demonstrate the savings in the total communication cost, we provide some numerical examples. In Section 4.1, we focus on computing linearly separable functions (cf. Proposition 2), and in Section 4.2, we focus on multi-linear functions (cf. Proposition 3).
To that end, to characterize the performance of our characteristic-graph-based approach for linearly separable functions, we denote by $\eta_{\mathrm{lin}}$ the gain of the sum-rate for the characteristic-graph-based approach given in (9) over the sum-rate of the distributed scheme of Wan et al. in [39], given in (8), and by $\eta_{SW}$ the gain of the sum-rate in (9) over the sum-rate of the fully distributed approach of Slepian–Wolf [60]. To capture general statistics, i.e., dataset skewness and correlations, and make a fair comparison, we adapt the transmission model of Wan et al. in [39] by modifying the i.i.d. dataset assumption.
We next study an example scenario (Section 4.1) for computing a class of linearly separable functions (4) over $\mathbb{F}_2$, where each of the demanded functions takes the form $f_j(W_{[K]}) = \sum_{k \in [K]} \gamma_{jk} W_k \bmod 2$, $j \in [K_c]$, under a specific correlation model across the subfunctions. More specifically, when the subfunctions $W_k \sim \mathrm{Bern}(\epsilon)$ are identically distributed and correlated across $k \in [K]$, and $\Delta \in \mathbb{Z}^+$, we model the correlation across datasets (a) exploiting the joint PMF model in [Theorem 1] [90] and (b) via the joint PMF described in Table 2. Furthermore, we assume for $K_c > 1$ that $\Gamma = \{\gamma_{jk}\} \in \mathbb{F}_2^{K_c \times K}$ is full rank. For the proposed setting, we demonstrate the achievable gains $\eta_{\mathrm{lin}}$ of our proposed technique for computing (4) as a function of the skew, $\epsilon$, and the correlation, $\rho$, of the datasets, for $K_c \in [N_r]$ with $N_r < K$, and other system parameters, and showcase the results via Figures 1 and 3–5.

4.1. Example Case: Distributed Computing of Linearly Separable Functions over $\mathbb{F}_2$

We consider the computation of the linearly separable functions given in (4) for general topologies, with general $N$, $K$, $M$, $N_r$, $K_c$, over $\mathbb{F}_2$, with an identical skew parameter $\epsilon \in [0, 1]$ for each subfunction, where $W_k \sim \mathrm{Bern}(\epsilon)$, $k \in [K]$, using the cyclic placement in (1) and incorporating the correlation between the subfunctions, with the correlation coefficient denoted by $\rho$. We consider three scenarios, as described next:
  • Scenario I. The number of demanded functions is $K_c = 1$, where the subfunctions could be uncorrelated or correlated.
This scenario is similar to the setting in [39]; however, different from [39], which is valid over a field of characteristic $q > 3$, we consider $\mathbb{F}_2$, and in the case of correlations, i.e., when $\rho > 0$, we capture the correlations across the transmissions (evaluated from subfunctions of datasets) from the distributed servers, as detailed earlier in Section 3. We first assume that the subfunctions are not correlated, i.e., $\rho = 0$, and evaluate $\eta_{\mathrm{lin}}$ for $f(W_{[K]}) = \sum_{k \in [K]} W_k \bmod 2$. The parameter of $f(W_{[K]})$, i.e., the probability that $f(W_{[K]})$ takes the value 1, can be computed using the recursive relation
$$P\Big(\sum_{k \in S:\, |S| = l \leq K} W_k \bmod 2 = 1\Big) = \sum_{k \leq l,\; k\ \mathrm{odd}} P\big(B(l, \epsilon) = k\big) = (1 - \epsilon_{l-1}) \cdot \epsilon + \epsilon_{l-1} \cdot (1 - \epsilon) \triangleq \epsilon_l, \quad 1 < l \leq K, \qquad (13)$$
where $B(l, \epsilon)$ denotes a binomial random variable with $l$ trials and success probability $\epsilon$, and $\epsilon_l$ is the probability that the modulo-2 sum of any $1 < l \leq K$ subfunctions takes the value one, with $W_k \sim \mathrm{Bern}(\epsilon)$ i.i.d. across $k \in S$, and with the convention $\epsilon_1 = \epsilon$.
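As a sanity check on the recursion in (13), the brief sketch below (ours) compares $\epsilon_l$ computed recursively with a brute-force enumeration over all $2^l$ realizations of $l$ i.i.d. Bern($\epsilon$) bits.

```python
from itertools import product

def eps_l(l, eps):
    """epsilon_l from the recursion in (13): P(mod-2 sum of l i.i.d. Bern(eps) bits equals 1)."""
    e = eps
    for _ in range(1, l):
        e = (1 - e) * eps + e * (1 - eps)
    return e

def eps_l_bruteforce(l, eps):
    total = 0.0
    for bits in product([0, 1], repeat=l):
        p = 1.0
        for b in bits:
            p *= eps if b == 1 else 1 - eps
        if sum(bits) % 2 == 1:
            total += p
    return total

print(eps_l(4, 0.2), eps_l_bruteforce(4, 0.2))  # both evaluate to 0.4352 (up to floating point)
```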
Given $N_r$, we denote by $N^* = \big\lfloor \frac{N}{N - N_r + 1} \big\rfloor$ the minimum number of servers, corresponding to a subset $\mathcal{N}^* \subseteq \Omega$, needed to compute $f(W_{[K]})$, where each server, with a cache size of $M$, computes a sum of $M$ subfunctions, and where, across these $N^*$ servers, the sets of subfunctions are disjoint. Hence, $P\big(\sum_{k \in S:\, |S| = M} W_k \bmod 2 = 1\big) = \epsilon_M$. Furthermore, the variable $\Delta_N = N - N^* \cdot (N - N_r + 1)$ represents whether servers in addition to the $N^*$ servers are needed to aid the computation, and if $\Delta_N \geq 1$, then $\Delta \cdot \Delta_N \triangleq \xi_N$ denotes the number of subfunctions to be computed by the set of additional servers, namely, $\mathcal{I}^* \subseteq \Omega$, and, similarly to the above, $P\big(\sum_{k \in S:\, |S| = \xi_N} W_k \bmod 2 = 1\big) = \epsilon_{\xi_N}$, which is obtained by evaluating $\epsilon_l$ at $l = \xi_N$.
Adapting (8) to $\mathbb{F}_2$, we obtain the total communication cost $R_{\mathrm{ach}}^{(\mathrm{lin})}$ for computing the linearly separable function $f(W_{[K]}) = \sum_{k \in [K]} W_k \bmod 2$ as
$$R_{\mathrm{ach}}^{(\mathrm{lin})} = \sum_{i = 1}^{N_r} h\Big(\sum_{k \in Z_i} W_k\Big) = N_r \cdot h(\epsilon_M). \qquad (14)$$
Using Proposition 2 and (13), we derive the sum-rate for distributed lossless computing of $f(W_{[K]})$ as
$$\sum_{i \in \Omega} R_i \leq N^* \cdot h(\epsilon_M) + \mathbb{1}_{\Delta_N > 0} \cdot h(\epsilon_{\xi_N}), \qquad (15)$$
where the indicator function $\mathbb{1}_{\Delta_N > 0}$ captures the rate contribution from the additional servers, if any. Using (15), the gain $\eta_{\mathrm{lin}}$ over the linearly separable solution of [39] is given as
$$\eta_{\mathrm{lin}} = \frac{N_r \cdot h(\epsilon_M)}{N^* \cdot h(\epsilon_M) + \mathbb{1}_{\Delta_N > 0} \cdot h(\epsilon_{\xi_N})}, \qquad (16)$$
where $h(\epsilon_{\xi_N})$ represents the rate needed from the set of additional servers $\mathcal{I}^* \subseteq \Omega$, aiding the computation by communicating the sum of the remaining subfunctions in a set $\mathcal{C} \subseteq Z_{\mathcal{I}^*}$, where the summation of these remaining subfunctions is denoted as $\sum_{k \in \mathcal{C} \subseteq Z_{\mathcal{I}^*}:\, k \notin \cup_{i \in \mathcal{N}^*} Z_i,\, |\mathcal{C}| = \xi_N} W_k$, which cannot be captured by the set $\mathcal{N}^*$.
Given $K_c = 1$ for the given modulo-2 sum function, we next incorporate the correlation model in [90] for each $W_k$, identically distributed with $W_k \sim \mathrm{Bern}(\epsilon)$ and with correlation $\rho$ across any two subfunctions. The formulation in [90] yields the following PMF for $f(W_{[K]})$:
$$P(f(W_{[K]}) = y) = \binom{K}{y}\, \epsilon^{y} (1 - \epsilon)^{K - y} (1 - \rho) \cdot \mathbb{1}_{y \in A_1} + \epsilon^{\frac{y}{K}} (1 - \epsilon)^{\frac{K - y}{K}}\, \rho \cdot \mathbb{1}_{y \in A_2}, \quad y \in \{0, \dots, K\}, \qquad (17)$$
where $\mathbb{1}_{y \in A_1}$ and $\mathbb{1}_{y \in A_2}$ are indicator functions, with $A_1 = \{0, 1, \dots, K\}$ and $A_2 = \{0, K\}$.
We depict the behavior of our gain, $\eta_{\mathrm{lin}}$, using the same topology $T(N, K, K_c, M, N_r)$ as in [39], with different system parameters $(N, K, M, N_r)$, under $\rho = 0$ in Figure 1 (Left). As we increase both $N$ and $K$, along with the number of active servers, $N_r$, the gain $\eta_{\mathrm{lin}}$ of the characteristic graph approach increases. This stems from the characteristic graph approach computing $f(W_{[K]})$ using only $N^*$ servers. From Figure 1 (Right), it is evident that by capturing the correlations between the subfunctions, and hence across the servers' caches, $\eta_{\mathrm{lin}}$ grows more rapidly until it reaches the maximum of (16), corresponding to $\eta_{\mathrm{lin}} = \frac{N_r}{N^*} = 10$, attained under full correlation (see Figure 1 (Right)).
What we can also see is that for $\rho = 0$, the gain rises as $\epsilon$ increases and grows linearly with $\frac{N_r}{N^*}$. As $\rho$ increases, reaching its maximum at $\rho = 1$, the gain is maximized, yielding the minimum communication cost that can be achieved with our technique. Here, the gain $\eta_{\mathrm{lin}}$ is dictated by the topology and is given as $\eta_{\mathrm{lin}} = \frac{N_r}{N^*}$. This linear relation shows that this specific topology can provide a very substantial reduction in the total communication cost, as $\rho$ goes to 1, over the state of the art [39], as shown in Figure 1 (Right) via the purple (solid) curve. Furthermore, one can draw a comparison between the characteristic graph approach and the approach in [60]; here, we denote the gain by $\eta_{SW}$. It is noteworthy that the sum-rate of all servers using the coding approach of Slepian–Wolf [60] is $R_{\mathrm{ach}}^{(SW)} = H(W_{[K]})$. With $\rho = 0$, this expression simplifies to $R_{\mathrm{ach}}^{(SW)} = K \cdot H(W_k)$, again resulting in a substantial reduction in the communication cost relative to $R_{\mathrm{ach}}^{(\mathrm{lin})}$ in (14) for the same topology as the purple (solid) curve in Figure 1 (Right).
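The gain in (16) for Scenario I with $\rho = 0$ can be evaluated directly; the sketch below (ours, with hypothetical parameters) combines the recursion in (13) with (14) and (15).

```python
from math import log2

def h(p):
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * log2(p) - (1 - p) * log2(1 - p)

def eps_l(l, eps):
    """P(mod-2 sum of l i.i.d. Bern(eps) bits equals 1), cf. (13)."""
    e = eps
    for _ in range(1, l):
        e = (1 - e) * eps + e * (1 - eps)
    return e

def eta_lin_scenario1(N, K, N_r, eps):
    """Gain (16): rho = 0, mod-2 sum of K i.i.d. Bern(eps) subfunctions, cyclic placement."""
    delta = K // N
    M = delta * (N - N_r + 1)
    N_star = N // (N - N_r + 1)
    Delta_N = N - N_star * (N - N_r + 1)
    xi_N = delta * Delta_N
    num = N_r * h(eps_l(M, eps))
    den = N_star * h(eps_l(M, eps)) + (h(eps_l(xi_N, eps)) if Delta_N > 0 else 0.0)
    return num / den

# Hypothetical topology T(N=10, K=10, N_r=5): Delta = 1, M = 6, N* = 1, Delta_N = 4.
for eps in (0.5, 0.1, 0.01):
    print(eps, eta_lin_scenario1(N=10, K=10, N_r=5, eps=eps))
```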
  • Scenario II. The number of demanded functions is $K_c = 2$, where the subfunctions could be uncorrelated or correlated.
To gain insights into the behavior of $\eta_{\mathrm{lin}}$, we consider an example distributed computation model with $K = N = 3$ and $N_r = 2$, where the subfunctions $W_1, W_2, W_3$ are assigned to $X_1$, $X_2$, and $X_3$ in a cyclic manner, with $W_k \sim \mathrm{Bern}(\epsilon)$, $k \in [3]$, and $K_c = 2$, with $f_1(W_{[K]}) = W_2$ and $f_2(W_{[K]}) = W_2 + W_3$.
Given $N_r = 2$, using the characteristic graph approach for the individual servers, an achievable compression scheme, for a given ordering $i$ and $j$ of server transmissions, relies on first compressing the characteristic graph $G_{X_i}$ constructed by server $i \in \Omega$, which has no side information, and then on the conditional rate needed for compressing the colors of $G_{X_j}$ for any other server $j \in \Omega \setminus i$ by incorporating the side information $Z_i = g_i(X_i)$ obtained from server $i \in \Omega$. Thus, by contrasting the total communication costs associated with the possible orderings, the minimum total communication cost $R_{\mathrm{ach}}^{(G)}$ can be determined (We can generalize (18) to $N_r > 2$, where, for a given ordering of server transmissions, each consecutive server that transmits sees all previous transmissions as side information, and the best ordering is the one that yields the minimum total communication cost, i.e., $R_{\mathrm{ach}}^{(G)}$.). The achievable sum-rate here takes the form
$$R_{\mathrm{ach}}^{(G)} = \min\big\{ H_{G_{X_1}}(X_1) + H_{G_{X_2}}(X_2 \,|\, Z_1),\; H_{G_{X_2}}(X_2) + H_{G_{X_1}}(X_1 \,|\, Z_2) \big\}. \qquad (18)$$
Focusing on the characteristic graph approach, we illustrate in Figure 2 how each server builds its union characteristic graph for simultaneously computing $f_1$ and $f_2$ according to (A12) (as detailed in Appendix A.2.1). In (18), the first term corresponds to $G_{X_1} = (V_{X_1}, E_{X_1})$, where $V_{X_1} = \{0,1\}^2$ is built using the support of $W_1$ and $W_2$, and the edges $E_{X_1}$ are built based on the rule that $(x_1^1, x_1^2) \in E_{X_1}$ if $F(x_1^1, x_2) \neq F(x_1^2, x_2)$ for some $x_2 \in V_{X_2}$, which, as we see here, requires two colors. Similarly, server 2 constructs $G_{X_2} = (V_{X_2}, E_{X_2})$ given $Z_1$, where $V_{X_2} = \{0,1\}^2$ is built using the support of $W_2$ and $W_3$, and where $Z_1$ determines $f_1 = W_2$; hence, to compute $f_2 = W_2 + W_3$ given $f_1 = W_2$, any two vertices taking values (Here, $x_2^1 = (w_2^1, w_3^1)$ and $x_2^2 = (w_2^2, w_3^2)$ represent two different realizations of the pair of subfunctions $W_2$ and $W_3$.) $x_2^1 = (w_2^1, w_3^1) \in V_{X_2}$ and $x_2^2 = (w_2^2, w_3^2) \in V_{X_2}$ are connected if $w_3^1 \neq w_3^2$. Hence, we require two distinct colors for $G_{X_2}$. As a result, the first term yields a sum-rate of $h(\epsilon) + h(\epsilon) = 2h(\epsilon)$. Similarly, the second term of (18) captures the impact of $G_{X_2} = (V_{X_2}, E_{X_2})$, where server 2 builds $G_{X_2}$ using the support of $W_2$ and $W_3$, and $G_{X_2}$ is a complete graph, distinguishing all possible binary pairs to compute $f_1$ and $f_2$ and requiring 4 different colors. Given $Z_2$, both $f_1$ and $f_2$ are deterministic. Hence, given $Z_2$, $G_{X_1}$ has no edges, which means that $H_{G_{X_1}}(X_1 \,|\, Z_2) = 0$. As a result, the ordering of server transmissions given by the second term of (18) yields the same sum-rate of $2h(\epsilon) + 0 = 2h(\epsilon)$. For this setting, the minimum required rate is $R_{\mathrm{ach}}^{(G)} = 2h(\epsilon)$, and the configuration captured by the second term provides a lower recovery threshold of $N_r = 1$ versus $N_r = 2$ for the configuration of server transmissions given by the first term of (18). The different $N_r$ achieved by these two configurations is also captured by Figure 2.
Alternatively, in the linearly separable approach [39], $N_r$ servers transmit the requested functions of the datasets stored in their caches. For the distributed computing of $f_1$ and $f_2$, servers 1 and 2 transmit at rate $H(W_2) = h(\epsilon)$ for computing $f_1$ and at rate $H(W_2 + W_3)$ for computing $f_2$. As a result, the achievable communication cost is given by $R_{\mathrm{ach}}^{(\mathrm{lin})} = h(\epsilon) + H(W_2 + W_3)$. Here, for a fair comparison, we update the model studied in [39] to capture the correlation within each server without accounting for the correlation across the servers.
Under this setting, for $\rho = 0$, the gain $\eta_{\mathrm{lin}}$ of the characteristic graph approach over the linearly separable solution of [39] for computing $f_1$ and $f_2$, as a function of $\epsilon \in [0, 1]$, takes the form
$$\eta_{\mathrm{lin}}(\epsilon) = \frac{h(\epsilon) + h(2\epsilon(1-\epsilon))}{2h(\epsilon)} \;\begin{cases} = 1, & \epsilon = \frac{1}{2}, \\ > 1, & \epsilon \in [0, 1] \setminus \{\frac{1}{2}\}, \end{cases} \qquad (19)$$
where $\eta_{\mathrm{lin}}(\epsilon) > 1$ for $\epsilon \neq \frac{1}{2}$ follows from the concavity of $h(\cdot)$, which yields the inequality $h(2\epsilon(1-\epsilon)) \geq h(\epsilon)$. Furthermore, $\eta_{\mathrm{lin}}$ approaches 1.5 as $\epsilon \to \{0, 1\}$ (see Figure 3).
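A two-line numerical check of (19) (our own snippet) confirms the boundary behavior stated above.

```python
from math import log2

h = lambda p: 0.0 if p <= 0.0 or p >= 1.0 else -p * log2(p) - (1 - p) * log2(1 - p)
eta_lin = lambda eps: (h(eps) + h(2 * eps * (1 - eps))) / (2 * h(eps))   # gain (19)

print(eta_lin(0.5), eta_lin(0.1), eta_lin(0.001))  # 1.0, about 1.2, about 1.4 (approaches 1.5 as eps -> 0)
```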
We next examine the setting where the correlation coefficient $\rho$ is nonzero, using the joint PMF $P_{W_2, W_3}$ of the subfunctions required for computing $f_1$ and $f_2$ ($W_2$ and $W_3$), as depicted in Table 2. This PMF corresponds to a binary non-symmetric channel model, where the correlation coefficient between $W_2$ and $W_3$ is $\rho = 1 - \frac{p}{1-\epsilon}$, and where the reverse crossover probability $P(W_3 = 1 \,|\, W_2 = 0)$ equals $\frac{\epsilon p}{1-\epsilon}$. Thus, our gain here compared to the linearly separable encoding and decoding approach of [39] is given as
$$\eta_{\mathrm{lin}} = \frac{H(W_2) + H(W_2 + W_3)}{H(W_2, W_3)} = \frac{h(\epsilon) + h(2\epsilon p)}{h(\epsilon) + (1-\epsilon)\, h\big(\frac{\epsilon p}{1-\epsilon}\big) + \epsilon\, h(p)}. \qquad (20)$$
We now consider the correlation model in Table 2, where the coefficient $\rho$ rises in $\epsilon$ for a fixed $p$. In Figure 4 (Left), we illustrate the behavior of $\eta_{\mathrm{lin}}$, given by (20), for computing $f_1$ and $f_2$ with $N_r = 2$ as a function of $p$ and $\epsilon$; for this setting, the correlation coefficient $\rho$ is a decreasing function of $p$ and an increasing function of $\epsilon$. We observe from (20) that the gain satisfies $\eta_{\mathrm{lin}} \geq 1$ for all $\epsilon \in [0, 1]$ and increases monotonically in $p$ (and hence decreases monotonically in $\rho$, due to the relation $\rho = 1 - \frac{p}{1-\epsilon}$) as $\epsilon$ deviates from $1/2$. For $\epsilon \in (0.5, 1]$, $\eta_{\mathrm{lin}}$ increases in $\epsilon$; for example, for $p = 0.1$, $\eta_{\mathrm{lin}}(1) = 1.28$, as depicted by the green (solid) curve. Similarly, for $\epsilon \in [0, 0.5)$, decreasing $\epsilon$ causes $\eta_{\mathrm{lin}}$ to exhibit a rising trend, e.g., for $p = 0.9$, $\eta_{\mathrm{lin}}(0) = 1.36$, as shown by the red (dash-dotted) curve. As $p$ approaches one, $\eta_{\mathrm{lin}}$ goes to 1.5 as $\epsilon$ tends to zero, which can be derived from (20). We note that the gains here are generally smaller than in the previous set of comparisons, shown in Figure 3.
More generally, given a user request consisting of $K_c = 2$ linearly separable functions (i.e., satisfying (4)), and after considering (20) beyond $N_r = 2$, we see that $\eta_{\mathrm{lin}}$ is at most $N_r$ as $\rho$ approaches one. We next use the joint PMF model used in obtaining (17), under which $f_2 \sim \big((1-\epsilon)^2 (1-\rho) + (1-\epsilon)\rho,\; 2\epsilon(1-\epsilon)(1-\rho),\; \epsilon^2 (1-\rho) + \epsilon\rho\big)$, to see that the gain takes the form
$$\eta_{\mathrm{lin}} = \frac{h(\epsilon) + H(f_2)}{h(\epsilon) + (1-\epsilon)\, h(\zeta_1) + \epsilon\, h(\zeta_2)}, \qquad (21)$$
where $\zeta_1 = (1-\epsilon)(1-\rho) + \rho$ and $\zeta_2 = (1-\epsilon)(1-\rho)$. For this model, we illustrate $\eta_{\mathrm{lin}}$ versus $\epsilon$ in Figure 4 (Right) for different $\rho$ values. Evaluating (21), the peak achievable gain is attained at $\rho = 1$, where $f_2 \sim (1-\epsilon,\, 0,\, \epsilon)$, yielding $H(W_2 + W_3) = h(\epsilon)$ and $H(W_3 \,|\, W_2) = 0$, and hence a gain of $\eta_{\mathrm{lin}} = N_r = 2$, as shown by the purple (solid) curve. On the other hand, for $\rho = 0$, we observe that $f_2 \sim \big((1-\epsilon)^2,\, 2\epsilon(1-\epsilon),\, \epsilon^2\big)$, yielding $H(W_2 + W_3) = h\big((1-\epsilon)^2, 2\epsilon(1-\epsilon), \epsilon^2\big) = h(2\epsilon(1-\epsilon)) + \big((1-\epsilon)^2 + \epsilon^2\big)\, h\big(\frac{\epsilon^2}{\epsilon^2 + (1-\epsilon)^2}\big)$ and $H(W_3 \,|\, W_2) = (1-\epsilon)h(\epsilon) + \epsilon h(\epsilon) = h(\epsilon)$, and hence it can be shown that the gain is lower bounded as $\eta_{\mathrm{lin}} \geq 1.25$.
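The two endpoint values quoted above can be reproduced directly from (21); the following snippet (ours) evaluates the gain under the mixture correlation model for $f_1 = W_2$ and $f_2 = W_2 + W_3$.

```python
from math import log2

def h(p):
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * log2(p) - (1 - p) * log2(1 - p)

def H_pmf(pmf):
    return -sum(p * log2(p) for p in pmf if p > 0)

def eta_lin_corr(eps, rho):
    """Gain (21) for f_1 = W_2 and f_2 = W_2 + W_3 under the mixture correlation model of (17)."""
    pmf_f2 = ((1 - eps) ** 2 * (1 - rho) + (1 - eps) * rho,
              2 * eps * (1 - eps) * (1 - rho),
              eps ** 2 * (1 - rho) + eps * rho)
    zeta1 = (1 - eps) * (1 - rho) + rho
    zeta2 = (1 - eps) * (1 - rho)
    return (h(eps) + H_pmf(pmf_f2)) / (h(eps) + (1 - eps) * h(zeta1) + eps * h(zeta2))

print(eta_lin_corr(0.5, 0.0), eta_lin_corr(0.5, 1.0))   # 1.25 and 2.0, matching the text
```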
  • Scenario III. The number of demanded functions is $K_c \in [N_r]$, and the number of datasets is equal to the number of servers, i.e., $K = N$, where the subfunctions are uncorrelated.
We now provide an achievable rate comparison between the approach in [39] and our graph-based approach, as summarized by Proposition 1, which generalizes the result in [Theorem 2] [39] to finite fields of characteristic $q \geq 2$, for the case of $\rho = 0$.
Here, to capture the dataset skewness and make a fair comparison, we adapt the transmission model of Wan et al. in [39] by modifying the i.i.d. dataset assumption and taking into account the skewness incurred within each server when determining the local computations $\sum_{k \in S:\, |S| = M} W_k$.
For the linearly separable model in (4), adapted to our setting by exploiting the summation $\sum_{k \in Z_i} W_k$ and the parameter $\epsilon_M$ given in (15), the communication cost for a general number of demanded functions $K_c$ with $\rho = 0$ is expressed as
$$R_{\mathrm{ach}}^{(\mathrm{lin})} = N_r \cdot h(\epsilon_M). \qquad (22)$$
In (22), as $\epsilon$ approaches 0 or 1, $h(\epsilon_M) \to 0$. Subsequently, the achievable communication cost for the characteristic graph model can be determined as
$$R_{\mathrm{ach}}^{(G)} = K_c \cdot N^* \cdot h(\epsilon). \qquad (23)$$
To understand the behavior of $\eta_{\mathrm{lin}} = \frac{N_r}{K_c N^*} \cdot \frac{h(\epsilon_M)}{h(\epsilon)}$, knowing that $\frac{N_r}{K_c N^*}$ is a fixed parameter, we need to examine the dynamic component $\frac{h(\epsilon_M)}{h(\epsilon)}$. Exploiting the Schur concavity (A real-valued function $f: \mathbb{R}^n \to \mathbb{R}$ is Schur-concave if $f(x_1, x_2, \dots, x_n) \leq f(y_1, y_2, \dots, y_n)$ holds whenever $(x_1, x_2, \dots, x_n)$ majorizes $(y_1, y_2, \dots, y_n)$, i.e., $\sum_{i=1}^{k} x_i \geq \sum_{i=1}^{k} y_i$ for all $k \in [n]$ [91].) of the binary entropy function, which tells us that $h(\mathbb{E}[X]) \geq \mathbb{E}[h(X)]$, we can see that as $\epsilon$ approaches 0 or 1,
$$\lim_{\epsilon \to \{0, 1\}} \frac{h(\epsilon_M)}{h(\epsilon)} \leq M, \quad M \in \mathbb{Z}^+, \qquad (24)$$
where the inequality between the left- and right-hand sides becomes loose as a function of $M$. As a result, as $\epsilon$ approaches 0 or 1, $\eta_{\mathrm{lin}}$ approaches $\frac{M \cdot N_r}{K_c \cdot N^*}$, which follows from exploiting (22) and (23) and the achievability of the upper bound in (24). We illustrate the upper bound on $\eta_{\mathrm{lin}}$ in Figure 5 and demonstrate the behavior of $\eta_{\mathrm{lin}}$ for $K_c$ demanded functions across various topologies with circular dataset placement, namely, for various $K = N$, i.e., when the amount of circular shift between two consecutive servers is $\Delta = \frac{K}{N} = 1$ and the cache size is $M = N - N_r + 1$, and for $\rho = 0$ and $\epsilon \leq 1/2$. We plot $\eta_{\mathrm{lin}}$ only for $\epsilon \leq 1/2$, accounting for the symmetry of the binary entropy function around $\epsilon = 1/2$. The multiplicative coefficient $\frac{N_r}{K_c N^*}$ of $\eta_{\mathrm{lin}}$ determines the growth, as depicted by the curves.
Thus, for a given topology $T(N, K, K_c, M, N_r)$ with $K_c$ demanded functions and $\rho = 0$, using (24), we see that $\eta_{\mathrm{lin}}$ grows as $\epsilon$ decreases over $\epsilon \in (0, 1/2]$ (we note that the behavior of $\eta_{\mathrm{lin}}$ is symmetric around $\epsilon = 1/2$), and a very substantial reduction in the total communication cost is possible as $\epsilon$ approaches $\{0, 1\}$, as shown in Figure 5 by the blue (solid) curve. The gain $\eta_{\mathrm{lin}}$ over [Theorem 2] [39], for a given topology, changes proportionally to $\frac{N_r}{K_c N^*}$. The gain $\eta_{SW}$ over [60], for $\rho = 0$, scales linearly (Incorporating the dataset skew into Proposition 1 ([Theorem 2] [39]), $R_{\mathrm{ach}}^{(\mathrm{lin})}$ simplifies to (22), which, from (24), can grow linearly in $M = N - N_r + 1$ at high skew, explaining the inferior performance of Proposition 1 relative to [60] as a function of the skew.) with $\frac{K}{K_c N^*}$. For instance, the gain for the blue (solid) curve in Figure 5 is $\eta_{SW} = 10$.
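The limiting behavior in (24) can also be checked numerically; the sketch below (ours, with a hypothetical $M$) tracks the ratio $h(\epsilon_M)/h(\epsilon)$ as $\epsilon$ decreases, using the recursion in (13) for $\epsilon_M$.

```python
from math import log2

def h(p):
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * log2(p) - (1 - p) * log2(1 - p)

def eps_l(l, eps):
    """P(mod-2 sum of l i.i.d. Bern(eps) bits equals 1), from the recursion in (13)."""
    e = eps
    for _ in range(1, l):
        e = (1 - e) * eps + e * (1 - eps)
    return e

M = 6   # hypothetical cache size, e.g., Delta = 1, N = 10, N_r = 5
for eps in (0.4, 0.1, 0.01, 1e-4):
    print(eps, h(eps_l(M, eps)) / h(eps))
# the ratio stays below M and slowly approaches M as eps -> 0, as stated around (24)
```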
In general, other functions over $\mathbb{F}_2$, such as the bitwise AND and the multi-linear function (see, e.g., Proposition 3), are more skewed and have lower entropies than linearly separable functions and, hence, are easier to compute. Therefore, the cost given in (23) can serve as an upper bound on the communication costs of those more skewed functions over $\mathbb{F}_2$.
We have here provided insights into the achievable gains in communication cost for several scenarios. We leave the study of $\eta_{lin}$ for more general topologies $\mathcal{T}(N, K, K_c, M, N_r)$ and correlation models beyond (17), devised for linearly separable functions, and beyond the joint PMF model in Table 2, as future work.
Proposition 3 illustrates the power of the characteristic graph approach in decreasing the communication cost of the distributed computing of multi-linear functions, given as in (11), compared to recovering the local computations $\prod_{k \in \mathcal{S}:\, |\mathcal{S}| = M} W_k$ using [60]. We denote by $\eta_{SW}$ the sum-rate gain of the graph entropy-based approach given in (12) (using the conditional entropy-based sum-rate expression in (A30)) over the sum-rate of the fully distributed scheme of Slepian–Wolf [60] for computing (11). For the proposed setting, we next showcase the achievable gains $\eta_{SW}$ of Proposition 3 via an example, with the results illustrated in Figure 6.

4.2. Distributed Computation of $K$-Multi-Linear Functions over $\mathbb{F}_2$

We study the behavior of $\eta_{SW}$ versus the skewness parameter $\epsilon$ for computing the multi-linear function given in (11) for i.i.d. $W_k \sim \mathrm{Bern}(\epsilon)$, $\epsilon \in [0, 1/2]$, across $k \in [K]$, and for a given $\mathcal{T}(N, K, K_c, M, N_r)$ with parameters $N$, $K$, $M = \Delta(N - N_r + 1)$, such that $N_r = N - 1$, $K_c = 1$, $\rho = 0$, and the number of replicates per dataset is $\frac{MN}{K} = 2$. We use Proposition 3 to determine the sum-rate upper bound and illustrate the gains $10\log_{10}(\eta_{SW})$ in decibels versus $\epsilon$ in Figure 6.
From the numerical results in Figure 6 (Left), we observe that the sum-rate gain of the graph entropy-based approach versus the fully distributed approach of [60], $\eta_{SW}$, can exceed a 10-fold gain in compression rate for uniform data and reach up to a $10^6$-fold gain for skewed data. The results for $\eta_{SW}$ showcase that our proposed scheme can guarantee an exponential rate reduction over [60] as a function of decreasing $\epsilon$. Furthermore, the sum-rate gains scale linearly with the cache size $M$, which scales with $K$ given $N_r = N - 1$. Note that $\eta_{SW}$ diminishes with increasing $N$ when $M$ and $\Delta$ are kept fixed. In Figure 6 (Right), for $M \leq K$, a fixed total cache size $MN$, and hence a fixed $K$, the gain $\eta_{SW}$ for large $N$ and small $M$ is higher than for small $N$ and large $M$, demonstrating the power of the graph-based approach as the topology becomes more and more distributed.

5. Conclusions

In this paper, we devised a distributed computation framework for general function classes in multi-server multi-function, single-user topologies. Specifically, we analyzed upper bounds on the communication cost for computing in such topologies, exploiting Körner's characteristic graph entropy and incorporating the structures of the datasets and functions, as well as the dataset correlations. To showcase the achievable gains of our framework and to understand the roles of dataset statistics, correlations, and function classes, we performed several experiments under cyclic dataset placement over a field of characteristic two. Our numerical evaluations for the distributed computing of linearly separable functions, as demonstrated in Section 4.1 via three scenarios, indicate that by incorporating dataset correlations and skew, it is possible to achieve a very substantial reduction in the total communication cost over the state of the art. Similarly, for the distributed computing of multi-linear functions, in Section 4.2, we demonstrate a very substantial reduction in the total communication cost versus the state of the art. Our main results (Theorem 1 and Propositions 1–3) and the observations through the examples help us gain insights into reducing the communication cost of distributed computation by taking into account the structures of the datasets (skew and correlations) and functions (characteristic graphs).
Potential future directions include providing a tighter achievability result for Theorem 1 and devising a converse bound on the sum-rate. They include conducting experiments under the coded caching scheme of Maddah–Ali and Niesen detailed in [83] in order to capture the finer granularity of placement, which can help tighten the achievable rates. They also include, beyond the special cases detailed in Propositions 1–3, exploring the achievable gains for a broader set of distributed computation scenarios, e.g., over-the-air computing, cluster computing, coded computing, distributed gradient descent, or more generally, distributed optimization and learning, as well as goal-oriented and semantic communication frameworks, all of which can be reinforced by compression that captures the skewness, correlations, and placement of datasets, the structures of functions, and the topology.

Author Contributions

Conceptualization, D.M. and P.E.; methodology, P.E. and D.M.; software, D.M. and M.R.D.S.; validation, D.M., M.R.D.S. and B.S.; formal analysis, D.M.; investigation, D.M. and M.R.D.S.; resources, D.M.; data curation (not applicable); writing—original draft preparation, D.M. and M.R.D.S.; writing—review and editing, D.M., B.S. and M.R.D.S.; visualization, D.M. and M.R.D.S.; supervision, D.M. and P.E.; project administration, D.M. and P.E.; funding acquisition, D.M. and P.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by a Huawei France-funded Chair towards Future Wireless Networks and supported by the program “PEPR Networks of the Future” of France 2030. Co-funded by the European Union (ERC, SENSIBILITÉ, 101077361, and ERC-PoC, LIGHT, 101101031). The views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors thank Kai Wan at the Huazhong University of Science and Technology, Wuhan, China for interesting discussions.

Conflicts of Interest

The authors declare a conflict of interest with MIT, Northeastern, UT Austin, and Inria Paris research center due to academic relationships. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
ach: Achievable
Bern: Bernoulli
G: Graph
i.i.d.: Independent and identically distributed
lin: Linearly separable encoding
MIS: Maximal independent set
PMF: Probability mass function
SW: Slepian–Wolf encoding

Appendix A. Technical Preliminary

Here, we detail the notion of characteristic graphs and their entropy in the context of source compression. We recall that the below proofs use the notation given in Section 1.5.

Appendix A.1. Distributed Source Compression, and Communication Cost

Given statistically dependent, finite-alphabet, i.i.d. random sequences $X_1, X_2, \ldots, X_N$, where $X_i \in \mathbb{F}_q^{|\mathcal{Z}_i| \times n}$ for $i \in \Omega$, the Slepian–Wolf theorem gives a theoretical lower bound on the lossless coding rates of the distributed servers in the limit as $n$ goes to infinity. Denoting by $R_i$ the encoding rate of server $i \in \Omega$, the sum-rate (or communication cost) for distributed source compression is given by
$\sum_{i \in \mathcal{S}} R_i \geq H(X_{\mathcal{S}} \mid X_{\mathcal{S}^c}), \quad \mathcal{S} \subseteq \Omega,$
where $\mathcal{S}$ denotes the indices of a subset of servers, $\mathcal{S}^c = \Omega \setminus \mathcal{S}$ its complement, and $X_{\mathcal{S}} = \{X_i,\ i \in \mathcal{S}\}$.
We recall that in the case of distributed source compression, given by the coding theorem of Slepian–Wolf [60], the encoder mappings specify the bin indices for the server sequences $X_i$. The binning is such that each $n$-vector $X_i$ of server $i \in \Omega$ is assigned a bin index drawn uniformly at random from the set $\{0, 1, \ldots, 2^{nR_i} - 1\}$ of $2^{nR_i}$ bins. The transmission of server $i \in \Omega$ is $e_{X_i}(X_i)$, where $e_{X_i}: \mathcal{X}_i \to \{0, 1, \ldots, 2^{nR_i} - 1\}$ is the encoding function of $i \in \Omega$ onto the bins. The total number of symbols in $e_{X_i}(X_i)$ is $T_i = H(e_{X_i}(X_i))$. This value corresponds to the aggregate number of symbols in the transmitted subfunctions from the server. Hence, the communication cost (rate) of $i \in \Omega$ for a sufficiently large $n$ satisfies
$R_i = \frac{T_i}{L} = \frac{H(e_{X_i}(X_i))}{n} \leq H(X_i),$
where the cost can be further reduced via a more efficient mapping $e_{X_i}(X_i)$ if $W_k$, $k \in [K]$, are correlated.
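As a small illustration of these rate constraints (not part of the original development; the joint PMF below is a hypothetical example), the following Python sketch evaluates the Slepian–Wolf conditional entropies and the sum-rate bound for a pair of correlated binary sources.

```python
import numpy as np

def entropy(p):
    """Entropy in bits of a discrete distribution given as an array of probabilities."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical joint PMF of two binary sources (X1, X2); rows index x1, columns index x2.
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])

H12 = entropy(P)                 # joint entropy H(X1, X2)
H1 = entropy(P.sum(axis=1))      # marginal entropy H(X1)
H2 = entropy(P.sum(axis=0))      # marginal entropy H(X2)

# Slepian-Wolf constraints: R1 >= H(X1|X2), R2 >= H(X2|X1), R1 + R2 >= H(X1, X2).
print(f"R1 >= {H12 - H2:.3f}, R2 >= {H12 - H1:.3f}, R1 + R2 >= {H12:.3f}")
```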

Appendix A.2. Characteristic Graphs, Distributed Functional Compression, and Communication Cost

In this section, we provide a summary of key graph-theoretic points devised by Körner [75] and studied by Alon and Orlitsky [73] and Orlitsky and Roche [74] to understand the fundamental limits of distributed computation.
Let us consider the canonical scenario with two servers, storing $X_1$ and $X_2$, respectively. The user requests a bivariate function $F(X_1, X_2)$ that could be linearly separable or, in general, non-linear. Associated with the source pair $(X_1, X_2)$ is a characteristic graph $G$, as defined by Witsenhausen [92]. We denote by $G_{X_1} = (V_{G_{X_1}}, E_{G_{X_1}})$ the characteristic graph that server one builds (server two similarly builds $G_{X_2}$) for computing (we detail the compression problem for the simultaneous computation of a set of requested functions in Appendix A.2.1) $F(X_1, X_2)$, determined as a function of $X_1$, $X_2$, and $F$, where $V_{G_{X_1}} = \mathcal{X}_1$, and an edge $(x_1^1, x_1^2) \in E_{G_{X_1}}$ exists if and only if there exists an $x_2^1 \in \mathcal{X}_2$ such that $P_{X_1, X_2}(x_1^1, x_2^1) \cdot P_{X_1, X_2}(x_1^2, x_2^1) > 0$ and $F(x_1^1, x_2^1) \neq F(x_1^2, x_2^1)$. Note that the idea of building $G_{X_1}$ can also be generalized to multivariate functions $F(X_\Omega)$, where $\Omega = [N]$ for $N > 2$ [82]. In this paper, we only consider vertex colorings. A valid coloring of a graph $G_{X_1}$ is such that each vertex of $G_{X_1}$ is assigned a color (code) such that adjacent vertices receive distinct colors (codes). Vertices that are not connected can be assigned the same or different colors. The chromatic number $\chi(G_{X_1})$ of a graph $G_{X_1}$ is the minimum number of colors needed to have a valid coloring of $G_{X_1}$ [76,77,79].
Definition A1 
(Characteristic graph entropy [73,75]). Given a random variable $X_1$ with characteristic graph $G_{X_1} = (V_{X_1}, E_{X_1})$ for computing the function $f(X_1, X_2)$, the entropy of the characteristic graph is expressed as
$H_{G_{X_1}}(X_1) = \min_{X_1 \in U_1 \in S(G_{X_1})} I(X_1; U_1), \quad \mathrm{(A3)}$
where $S(G_{X_1})$ is the set of all MISs of $G_{X_1}$, where an MIS is an independent set that is not a subset of any other independent set, and an independent set of a graph is a set of its vertices in which no two vertices are adjacent [93]. The notation $X_1 \in U_1 \in S(G_{X_1})$ means that the minimization is over all distributions $P_{U_1, X_1}(u_1, x_1)$ such that $P_{U_1, X_1}(u_1, x_1) > 0$ implies $x_1 \in u_1$, where $U_1$ is an MIS of $G_{X_1}$.
Similarly, the conditional graph entropy for $X_1$ with characteristic graph $G_{X_1}$ for computing $f(X_1, X_2)$, given $X_2$ as side information, is defined in [74] using the notation $U_1 - X_1 - X_2$, which indicates a Markov chain:
$H_{G_{X_1}}(X_1 \mid X_2) = \min_{\substack{U_1 - X_1 - X_2 \\ X_1 \in U_1 \in S(G_{X_1})}} I(X_1; U_1 \mid X_2). \quad \mathrm{(A4)}$
The Markov chain relation in (A4) implies that $H_{G_{X_1}}(X_1 \mid X_2) \leq H_{G_{X_1}}(X_1)$ [Ch. 2] [94]. In (A4), the goal is to determine the equivalence classes $U_1$ of $x_1^i \in \mathcal{X}_1$ that yield the same function outcome for every $x_2^1 \in \mathcal{X}_2$ such that $P_{X_1, X_2}(x_1^i, x_2^1) > 0$. We next consider an example to clarify the distinction between the characteristic graph entropy, $H_{G_{X_1}}(X_1)$, and the entropy of a conditional characteristic graph, or conditional graph entropy, $H_{G_{X_1}}(X_1 \mid X_2)$.
Example A1 
(Characteristic graph entropy of ternary random variables [Examples 1–2] [74]). In this example, we first investigate the characteristic graph entropy, $H_{G_{X_1}}(X_1)$, and then the conditional graph entropy, $H_{G_{X_1}}(X_1 \mid X_2)$.
1. 
Let $P_{X_1}$ be a uniform PMF over the set $\{1, 2, 3\}$. Assume that $G_{X_1}$ has only one edge, i.e., $E_{X_1} = \{(1, 3)\}$. Hence, the set of MISs is given as $S(G_{X_1}) = \{\{1, 2\}, \{2, 3\}\}$.
To determine the entropy of the characteristic graph, i.e., $H_{G_{X_1}}(X_1)$, from (A3), our objective is to minimize $I(X_1; U_1)$, which is a convex function of $P(U_1 \mid X_1)$. Hence, $I(X_1; U_1)$ is minimized when the conditional distribution $P(U_1 \mid X_1)$ is selected as $P(U_1 = \{1, 2\} \mid X_1 = 1) = 1$, $P(U_1 = \{2, 3\} \mid X_1 = 3) = 1$, and $P(U_1 = \{1, 2\} \mid X_1 = 2) = P(U_1 = \{2, 3\} \mid X_1 = 2) = 1/2$. As a result of this PMF, we have
$H_{G_{X_1}}(X_1) = H(U_1) - H(U_1 \mid X_1) = 1 - \frac{1}{3} = \frac{2}{3}.$
2. 
Let $P_{X_1, X_2}$ be a uniform PMF over the set $\{(x_1, x_2): x_1, x_2 \in \{1, 2, 3\},\ x_1 \neq x_2\}$ and $E_{X_1} = \{(1, 3)\}$. Note that $H(X_1 \mid X_2) = 1$ given the joint PMF. To determine the conditional characteristic graph entropy, i.e., $H_{G_{X_1}}(X_1 \mid X_2)$, using (A4), our objective is to minimize $I(X_1; U_1 \mid X_2)$, which is convex in $P(U_1 \mid X_1)$. Hence, $I(X_1; U_1 \mid X_2)$ is minimized when $P(U_1 \mid X_1)$ is selected as $P(U_1 = \{1, 2\} \mid X_1 = 1) = P(U_1 = \{2, 3\} \mid X_1 = 3) = 1$ and $P(U_1 = \{1, 2\} \mid X_1 = 2) = P(U_1 = \{2, 3\} \mid X_1 = 2) = 1/2$. Hence, we obtain
$H(U_1 \mid X_2) = \frac{1}{3} H(U_1 \mid X_1 \in \{2, 3\}) + \frac{1}{3} H(U_1 \mid X_1 \in \{1, 3\}) + \frac{1}{3} H(U_1 \mid X_1 \in \{1, 2\}) = \frac{1}{3} h\!\left(\frac{1}{4}\right) + \frac{1}{3} + \frac{1}{3} h\!\left(\frac{1}{4}\right),$
which yields, using the Markov chain $U_1 - X_1 - X_2$, that
$H_{G_{X_1}}(X_1 \mid X_2) = H(U_1 \mid X_2) - H(U_1 \mid X_1, X_2) = H(U_1 \mid X_2) - H(U_1 \mid X_1) = \frac{1}{3} h\!\left(\frac{1}{4}\right) + \frac{1}{3} + \frac{1}{3} h\!\left(\frac{1}{4}\right) - \frac{1}{3} = \frac{2}{3} h\!\left(\frac{1}{4}\right).$
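As a numerical sanity check on Example A1 (a sketch we add for illustration; the variable names are ours), the following Python snippet evaluates the mutual information induced by the stated optimal choice of $P(U_1 \mid X_1)$ for both the unconditional and the conditional cases.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def h(p):
    """Binary entropy in bits."""
    return entropy([p, 1 - p])

# MISs U1 in {A = {1,2}, B = {2,3}}; rows are X1 = 1, 2, 3 and columns are U1 = A, B.
P_U_given_X = np.array([[1.0, 0.0],
                        [0.5, 0.5],
                        [0.0, 1.0]])

# Case 1: X1 uniform over {1,2,3}; H_G(X1) = I(X1; U1) under the stated P(U1|X1).
P_X = np.full(3, 1 / 3)
P_XU = P_X[:, None] * P_U_given_X
I_XU = entropy(P_X) + entropy(P_XU.sum(axis=0)) - entropy(P_XU)
print(f"H_G(X1)    = {I_XU:.4f}   (expected 2/3 = {2/3:.4f})")

# Case 2: (X1, X2) uniform over pairs with x1 != x2; H_G(X1|X2) = H(U1|X2) - H(U1|X1).
H_U_given_X2 = 1/3 * h(1/4) + 1/3 * 1.0 + 1/3 * h(1/4)   # as derived in Example A1
H_U_given_X1 = 1/3 * 1.0                                  # only X1 = 2 randomizes U1
print(f"H_G(X1|X2) = {H_U_given_X2 - H_U_given_X1:.4f}   (expected 2/3 * h(1/4) = {2/3 * h(1/4):.4f})")
```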
Definition A2 
(Chromatic entropy [73]). The chromatic entropy of a graph $G_{X_1}$ is defined as
$H^{\chi}_{G_{X_1}}(X_1) = \min_{c_{G_{X_1}}} H(c_{G_{X_1}}(X_1)),$
where the minimization is over the set of valid colorings $c_{G_{X_1}}$ of $G_{X_1}$.
Let $G_{X_1}^n = (V_{X_1}^n, E_{X_1}^n)$ be the $n$-th OR power of the graph $G_{X_1}$ for the source sequence $X_1$ to compress $F(X_1, X_2)$. In this OR power graph, $V_{X_1}^n = \mathcal{X}_1^n$ and $(x_1^1, x_1^2) \in E_{X_1}^n$, where $x_1^1 = (x_{11}^1, x_{12}^1, \ldots, x_{1n}^1)$ and similarly for $x_1^2$, when there exists at least one coordinate $l \in [n]$ such that $(x_{1l}^1, x_{1l}^2) \in E_{X_1}$. We denote a coloring of $G_{X_1}^n$ by $c_{G_{X_1}^n}(X_1)$. The encoding function at server one is a mapping from $X_1$ to the colors $c_{G_{X_1}^n}(X_1)$ of the characteristic graph $G_{X_1}^n$ for computing $F(X_1, X_2)$. In other words, $c_{G_{X_1}^n}(X_1)$ specifies the color classes of $X_1$ such that each color class forms an independent set that induces the same function outcome.
Using Definition A2, we can determine the chromatic entropy of the graph $G_{X_1}^n$ as
$H^{\chi}_{G_{X_1}^n}(X_1) = \min_{c_{G_{X_1}^n}} H(c_{G_{X_1}^n}(X_1)).$
In [75], Körner has shown the relation between the chromatic and graph entropies, which we detail next.
Theorem A1 
(Chromatic entropy versus graph entropy [75]). The following relation holds between the characteristic graph entropy and the chromatic entropy of the graph $G_{X_1}^n$ in the limit of large $n$:
$H_{G_{X_1}}(X_1) = \lim_{n \to \infty} \frac{1}{n} H^{\chi}_{G_{X_1}^n}(X_1).$
Similarly, from (A9) and (A10), the conditional graph entropy of $X_1$ given $X_2$ is given as
$H_{G_{X_1}}(X_1 \mid X_2) = \lim_{n \to \infty} \min_{c_{G_{X_1}^n},\, c_{G_{X_2}^n}} \frac{1}{n} H\big(c_{G_{X_1}^n}(X_1) \mid c_{G_{X_2}^n}(X_2)\big).$
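To illustrate the gap that Theorem A1 closes, the following Python sketch (an illustration we add; the graph and PMF are those of Example A1) enumerates the valid colorings of the single-letter graph and reports the minimum-entropy coloring, which strictly exceeds the graph entropy of $2/3$; the normalized chromatic entropy of the $n$-th OR power graph approaches $2/3$ only as $n$ grows.

```python
from itertools import product
from math import log2

# Graph from Example A1: vertices {1, 2, 3}, single edge (1, 3); X1 uniform.
vertices = [1, 2, 3]
edges = [(1, 3)]
prob = {v: 1 / 3 for v in vertices}

def entropy(dist):
    return -sum(q * log2(q) for q in dist.values() if q > 0)

best = float("inf")
for colors in product(range(3), repeat=len(vertices)):   # at most 3 colors are needed here
    c = dict(zip(vertices, colors))
    if any(c[u] == c[v] for (u, v) in edges):
        continue                                          # not a valid coloring
    color_dist = {}
    for v in vertices:
        color_dist[c[v]] = color_dist.get(c[v], 0.0) + prob[v]
    best = min(best, entropy(color_dist))                 # H(c(X1)) for this coloring

print(f"Chromatic entropy at n = 1: {best:.4f} bits  >  graph entropy 2/3 = {2/3:.4f} bits")
```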

Appendix A.2.1. A Characteristic-Graph-Based Encoding Framework for Simultaneously Computing a Set of Functions

The user demands a set of functions $\{F_j(X_\Omega)\}_{j \in [K_c]} \in \mathbb{R}^{K_c}$ that are possibly non-linear in the subfunctions. In our proposed framework, for the distributed computing of these functions, we leverage characteristic graphs that can capture the structure of the subfunctions. To determine the achievable rate of distributed lossless functional compression, we determine the colorings of these graphs and evaluate the entropy of such colorings. In the case of $K_c > 1$ functions, let $G_{X_i, j} = (V_{X_i}, E_{X_i, j})$ be the characteristic graph that server $i \in \Omega$ builds for computing function $j \in [K_c]$. The graphs $\{G_{X_i, j}\}_{j \in [K_c]}$ are defined on the same vertex set.
Union graphs for simultaneously computing a set of functions with side information have been considered in [82], using multi-functional characteristic graphs. A multi-functional characteristic graph is an OR function of the individual characteristic graphs for the different functions [Definition 45] [82]. To that end, server $i \in \Omega$ creates a union of graphs on the same set of vertices $V_{X_i}$ with a set of edges $E_{X_i}$, which satisfies
$G_{X_i} = \bigcup_{j \in [K_c]} G_{X_i, j} = (V_{X_i}, E_{X_i}), \quad E_{X_i} = \bigcup_{j \in [K_c]} E_{X_i, j}. \quad \mathrm{(A12)}$
In other words, we need to distinguish the outcomes $x_i^1$ and $x_i^2$ of server $i$ if there exists at least one function $F_j(x_\Omega)$, $j \in [K_c]$, out of the $K_c$ functions such that $F_j(x_i^1, x_{\Omega \setminus i}^1) \neq F_j(x_i^2, x_{\Omega \setminus i}^1)$ for some $x_{\Omega \setminus i}^1 \in \mathcal{X}_{\Omega \setminus i}$ with $P_{X_\Omega}(x_i^1, x_{\Omega \setminus i}^1) \cdot P_{X_\Omega}(x_i^2, x_{\Omega \setminus i}^1) > 0$. The server then compresses the union graph $G_{X_i}$ by exploiting (A9) and (A10).
In the special case when the number of demanded functions $K_c$ is large (or tends to infinity), such that the union of all subspaces spanned by the independent sets of each $G_{X_i, j}$, $j \in [K_c]$, is the same as the subspace spanned by $X_i$, the MISs of $G_{X_i}$ in (A12) for server $i \in \Omega$ become singletons, rendering $G_{X_i}$ a complete graph. In this case, the problem boils down to the paradigm of distributed source compression (see Appendix A.1).
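A minimal Python sketch of the union-graph construction in (A12) follows (added for illustration; the two demanded functions, the alphabets, and the uniform support are hypothetical choices): an edge is placed between two local values of $X_1$ whenever at least one demanded function distinguishes them for some feasible value at the other server, and values left non-adjacent, here $\{0, 1\}$, may share a color.

```python
from itertools import combinations

# Hypothetical two-server example: X1 takes values in {0,...,3}, X2 in {0,...,2},
# all pairs assumed to have positive probability. Two demanded functions of (x1, x2).
X1_vals = range(4)
X2_vals = range(3)
functions = [
    lambda x1, x2: int(x1 >= 2),        # F_1: threshold on the local value
    lambda x1, x2: int(x1 + x2 >= 5),   # F_2: depends on both servers
]

def characteristic_edges(funcs):
    """Edge set of the characteristic graph G_{X_1} for the given list of functions."""
    edges = set()
    for a, b in combinations(X1_vals, 2):
        if any(F(a, x2) != F(b, x2) for F in funcs for x2 in X2_vals):
            edges.add((a, b))           # a and b must be distinguished
    return edges

G1 = characteristic_edges([functions[0]])
G2 = characteristic_edges([functions[1]])
G_union = characteristic_edges(functions)    # equals the union of G1 and G2, per (A12)

print("E(G_{X_1,1}) =", sorted(G1))
print("E(G_{X_1,2}) =", sorted(G2))
print("E(G_{X_1})   =", sorted(G_union), "  (vertices 0 and 1 remain non-adjacent)")
```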

Appendix A.2.2. Distributed Functional Compression

The fundamental limit of functional compression has been given by Körner [75]. Given $X_i \in \mathbb{F}_q^{|\mathcal{Z}_i| \times n}$ for server $i \in \Omega$, the encoding function $e_{X_i}$ specifies the MISs given by the valid colorings $c_{G_{X_i}^n}(X_i)$. Let the number of symbols in $Z_i = g_i(X_i) = e_{X_i}(c_{G_{X_i}^n}(X_i))$ be $T_i$ for server $i \in \Omega$. Hence, the communication cost of server $i$, as $n \to \infty$, is given by (5).
Defining $G_{X_\mathcal{S}} = [G_{X_i}]_{i \in \mathcal{S}}$ for a given subset $\mathcal{S} \subseteq \Omega$ chosen to guarantee the distributed computation of $F(X_\Omega)$, i.e., $|\mathcal{S}| \geq N_r$, the sum-rate of the servers for distributed lossless functional compression for computing $F(X_\Omega) = \{F(X_{1l}, X_{2l}, \ldots, X_{Nl})\}_{l=1}^{n}$ equals
$R_{\mathrm{ach}} = \sum_{i \in \mathcal{S}} R_i \geq H_{G_{X_\mathcal{S}}}(X_\mathcal{S} \mid Z_{\mathcal{S}^c}), \quad \mathcal{S} \subseteq \Omega,$
where $H_{G_{X_\mathcal{S}}}(X_\mathcal{S})$ is the joint graph entropy of $\mathcal{S} \subseteq \Omega$, defined as [Definition 30] [82]:
$H_{G_{X_\mathcal{S}}}(X_\mathcal{S}) = \lim_{n \to \infty} \min_{\{c_{G_{X_i}^n}\}_{i \in \mathcal{S}}} \frac{1}{n} H\big(c_{G_{X_i}^n}(X_i),\ i \in \mathcal{S}\big),$
where $c_{G_{X_i}^n}(X_i)$ is the coloring of the $n$-th power graph $G_{X_i}^n$ that $i \in \Omega$ builds for computing $f(X_\Omega)$ [82].
Similarly, exploiting [Definition 31] [82], the conditional graph entropy of the servers is given as
$H_{G_{X_\mathcal{S}}}(X_\mathcal{S} \mid Z_{\mathcal{S}^c}) = \lim_{n \to \infty} \min_{\{c_{G_{X_i}^n}\}_{i \in \Omega}} \frac{1}{n} H\big(c_{G_{X_i}^n}(X_i),\ i \in \mathcal{S} \,\big|\, e_{X_i}(c_{G_{X_i}^n}(X_i)),\ i \in \mathcal{S}^c\big).$
Using (A12), we jointly capture the structures of the set of demanded functions. Hence, this enables us to provide a communication cost model in (5) that is more refined than characterizations that scale directly with $K_c$; see, e.g., [39,58,68].

Appendix B. Proofs of Main Results

Appendix B.1. Proof of Theorem 1

Consider the general topology $\mathcal{T}(N, K, K_c, M, N_r)$, under a general placement of datasets, for a set of $K_c$ general functions $\{f_j(W_\mathcal{K})\}_{j \in [K_c]}$ requested by the user, and under general jointly distributed dataset models, including non-uniform inputs and allowing correlations across datasets.
We note that server $i \in \Omega$ builds a characteristic graph (the characteristic-graph-based approach is valid provided that each subfunction $W_k$, $k \in \mathcal{K}$, contained in $X_i = W_{\mathcal{Z}_i}$ is defined over a $q$-ary field with $q \geq 2$, to ensure that the union graph $G_{X_i}$, $i \in \Omega$ (or each $G_{X_i, j}$, $j \in [K_c]$) has more than one vertex) $G_{X_i, j}$ for the distributed lossless computing of $f_j(W_\mathcal{K})$, $j \in [K_c]$. Similarly, server $i \in \Omega$ builds a union characteristic graph for computing $\{f_j(W_\mathcal{K})\}_{j \in [K_c]}$. We denote by $G_{X_i} = (V_{X_i}, E_{X_i}) = \bigcup_{j \in [K_c]} G_{X_i, j}$ the union characteristic graph, given as in (A12). In the description of $G_{X_i}$, the set $V_{X_i}$ is the support set of $X_i$, i.e., $V_{X_i} = \mathcal{X}_i$, and $E_{X_i}$ is the union of edges, i.e., $E_{X_i} = \bigcup_{j \in [K_c]} E_{X_i, j}$, where $E_{X_i, j}$ denotes the set of edges in $G_{X_i, j}$, which is the characteristic graph the server builds for the distributed lossless computing of $f_j(W_\mathcal{K})$ for a given function $j \in [K_c]$.
To compute the set of demanded functions $\{f_j(W_\mathcal{K})\}_{j \in [K_c]}$, we assume that server $i \in \Omega$ can use a codebook of functions denoted by $\mathcal{C}_i$ such that $\mathcal{C}_i \ni g_i$, where the user can compute its demanded functions using the set of transmitted information $\{g_i(X_i)\}_{i \in \mathcal{S}}$ provided from any set of $|\mathcal{S}| = N_r$ servers. More specifically, server $i \in \Omega$ chooses a function $g_i \in \mathcal{C}_i$ to encode $X_i$. Note that $g_i$ represents, in the context of encoding characteristic graphs, the mapping from $X_i$ to a valid coloring $c_{G_{X_i}}(X_i)$. We denote by $Z_i = g_i(X_i) = e_{X_i}(c_{G_{X_i}^n}(X_i))$ the color encoding performed by server $i \in \Omega$ for the length-$n$ realization of $X_i$. For convenience, we use the following shorthand notation to represent the transmitted information from the server:
$Z_i = g_i(X_i), \quad i \in \Omega. \quad \mathrm{(A16)}$
Combining the notions of the union graph in (A12) and the encodings of the individual servers given in (A16), the rate $R_i$ needed from server $i \in \Omega$ to meet the user demand is upper bounded by the cost of the best encoding, which minimizes the rate of information transmission from the respective server. Equivalently,
$R_i \leq \min_{Z_i = g_i(X_i):\ g_i \in \mathcal{C}_i} H_{G_{X_i}}(X_i), \quad \mathrm{(A17)}$
where equality is achievable in (A17). Because the user can recover the desired functions using any set of $N_r$ servers, the achievable sum-rate is upper bounded by
$R_{\mathrm{ach}} \leq \sum_{i=1}^{N_r} \min_{Z_i = g_i(X_i):\ g_i \in \mathcal{C}_i} H_{G_{X_i}}(X_i).$

Appendix B.2. Proof of Proposition 1

For the multi-server multi-function distributed computing architecture, this proposition restricts the demand to be a set of linearly separable functions, given as in (4). Given the recovery threshold N r , it holds that
$R_{\mathrm{ach}} \leq \sum_{i=1}^{N_r} \min_{Z_i = g_i(X_i):\ g_i \in \mathcal{C}_i} H_{G_{X_i}}(X_i) = \sum_{i=1}^{N_r} \min_{Z_i:\ g_i \in \mathcal{C}_i}\ \min_{X_i \in U_i \in S(G_{X_i})} I(X_i; U_i) \quad \mathrm{(A19)}$
$= \sum_{i=1}^{N_r} \Big[ H\big(W_{(i-1)\Delta+1}^{(i-1)\Delta+M}\big) - H\big(W_{(i-1)\Delta+1}^{(i-1)\Delta+M} \,\big|\, Z_i\big) \Big] \quad \mathrm{(A20)}$
$= \sum_{i=1}^{N_r} \big( M - M + H(Z_i) \big) = \sum_{i=1}^{N_r} H(Z_i), \quad \mathrm{(A21)}$
where in (A19), we used the identity $H_{G_{X_i}}(X_i) = \min_{X_i \in U_i \in S(G_{X_i})} I(X_i; U_i)$. Furthermore, if the codebook $\mathcal{C}_i$ is restricted to linear combinations of the subfunctions, $Z_i$ is given by the following set of linear equations:
$Z_i = g_i(X_i) = \sum_{k=(i-1)\Delta+1}^{(i-1)\Delta+M} \alpha_k^{(l)} W_k, \quad l \in [K_c]. \quad \mathrm{(A22)}$
In other words, $Z_i$, $i \in [N_r]$, is a vector-valued function. Note that each server contributes to determining the set of linearly separable functions $\{f_j(W_\mathcal{K}),\ j \in [K_c]\}$ of the datasets, given as in (4), in a distributed manner. Hence, each independent set $U_i \in S(G_{X_i})$ of $X_i$, with $S(G_{X_i})$ denoting the set of MISs of $G_{X_i}$, is captured by the linear functions of $\{W_k\}_{k \in [(i-1)\Delta+1\,:\,(i-1)\Delta+M]}$, i.e., each $U_i \in S(G_{X_i})$ is determined by (A22). Hence, the user can recover the requested functions by linearly combining the transmissions of the $N_r$ servers:
$f_j(W_\mathcal{K}) = \sum_{i=1}^{N_r} \beta_{ji} Z_i = \sum_{i=1}^{N_r} \beta_{ji}\, g_i(X_i) = \sum_{k=1}^{K} \gamma_{jk} W_k, \quad j \in [K_c]. \quad \mathrm{(A23)}$
In (A20), we use the definition of mutual information, $I(X_i; U_i) = H(X_i) - H(X_i \mid U_i)$, where given $i \in [N_r]$ and $\Delta = \frac{K}{N}$, it holds under cyclic placement that
$X_i = W_{(i-1)\Delta+1}^{(i-1)\Delta+M} = \big( W_{(i-1)\Delta+1}, W_{(i-1)\Delta+2}, \ldots, W_{(i-1)\Delta+M} \big),$
and $\alpha_k^{(l)}$ are the coefficients for computing function $l \in [K_c]$. In (A21), we used the fact that $W_k$ is uniform over $\mathbb{F}_q$ and i.i.d. across $k \in [K]$ and rewrote the conditional entropy expression such that
$H\big(W_{(i-1)\Delta+1}^{(i-1)\Delta+M} \,\big|\, Z_i\big) = H\big(W_{(i-1)\Delta+1}^{(i-1)\Delta+M}, Z_i\big) - H(Z_i) \overset{(a)}{=} H\big(W_{(i-1)\Delta+1}^{(i-1)\Delta+M}\big) - H(Z_i),$
where $(a)$ follows from the fact that $Z_i$ is a function of $W_{(i-1)\Delta+1}^{(i-1)\Delta+M}$. For a given $l \in [K_c]$ and field size $q$, the relation $\sum_{k=(i-1)\Delta+1}^{(i-1)\Delta+M} \alpha_k^{(l)} W_k$ ensures that $G_{X_i}$ has $q$ independent sets, where each such set $U_i$ contains $q^{M-1}$ different values of $X_i$. Exploiting the fact that $W_k$ is i.i.d. and uniform over $\mathbb{F}_q$, each element of $Z_i$ is uniform over $\mathbb{F}_q$. Hence, the achievable sum-rate is upper bounded by
$\sum_{i=1}^{N_r} \min_{Z_i:\ g_i \in \mathcal{C}_i} H_{G_{X_i}}(X_i) \leq K_c N_r. \quad \mathrm{(A26)}$
Exploiting the cyclic placement model, we can tighten the bound in (A26). Note that server $i = 1$ can help recover at most $M$ subfunctions (i.e., $M$ transmissions are needed to recover $M$ subfunctions), and each of the servers $i \in [2 : N_r]$ can help recover at most an additional $\Delta$ subfunctions (i.e., $\Delta$ transmissions are needed to recover $\Delta$ subfunctions). Hence, the set of servers $[N_r]$ suffices to provide $M + (N_r - 1)\Delta = N\Delta = K$ subfunctions and reconstruct any desired function of $W_\mathcal{K}$. Due to the cyclic placement, each $W_k$ is stored in exactly $N - N_r + 1$ servers. Now, let us consider the following four scenarios:
(i)
When $1 \leq K_c < \Delta$, it is sufficient for each server to transmit $K_c$ linearly independent combinations of their subfunctions. This leads to resolving $K_c N_r$ linear combinations of the $K$ subfunctions from the $N_r$ servers, which are sufficient to derive the demanded $K_c$ linear functions. Because $K_c N_r < \Delta N_r$, there are $K - K_c N_r > \Delta(N - N_r) = M - \Delta$ unresolved linear combinations of the $K$ subfunctions.
(ii)
When $\Delta \leq K_c \leq \Delta N_r$, it is sufficient for each server to transmit at most $\Delta$ linearly independent combinations of their subfunctions. This leads to resolving $\Delta N_r$ linear combinations of the $K$ subfunctions, leaving $\Delta(N - N_r) = M - \Delta$ unresolved linear combinations of the $K$ subfunctions.
(iii)
When $\Delta N_r < K_c \leq K$, each server needs to transmit at a rate $\frac{K_c}{N_r}$, where $\frac{K_c}{N_r} > \Delta$ and $\frac{K_c}{N_r} \leq \frac{K}{N_r} = \frac{\Delta N_r + \Delta(N - N_r)}{N_r} = \Delta + \frac{\Delta(N - N_r)}{N_r}$, which gives the number of linearly independent combinations needed to meet the demand. This yields a sum-rate of $K_c$. The subset of servers may need to provide up to an additional $\Delta(N - N_r)$ linear combinations, and $\frac{\Delta(N - N_r)}{N_r}$ defines the maximum number of additional linear combinations per server, i.e., the required number of combinations when $K_c = K$.
(iv)
When $K < K_c$, it is easy to note that since any $K$ linearly independent equations in (A23) suffice to recover $W_\mathcal{K}$, the sum-rate $K$ is achievable.
From (i)–(iv), we obtain the following upper bound on the achievable sum-rate:
$\sum_{i=1}^{N_r} \min_{Z_i:\ g_i \in \mathcal{C}_i} H_{G_{X_i}}(X_i) = \begin{cases} K_c N_r, & 1 \leq K_c < \Delta, \\ \Delta N_r, & \Delta \leq K_c \leq \Delta N_r, \\ K_c, & \Delta N_r < K_c \leq K, \\ K, & K < K_c, \end{cases} \quad \mathrm{(A27)}$
where it is easy to note that (A27) matches the communication cost in [Theorem 2] [39]. The i.i.d. assumption on the distribution of $W_k$ ensures that this result holds for any $q \geq 2$.
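For a quick numerical reading of (A27), the following Python sketch (our illustration; the topology parameters are hypothetical) evaluates the piecewise sum-rate bound under cyclic placement with $\Delta = K/N$.

```python
def prop1_sum_rate(N, K, Nr, Kc):
    """Upper bound on the achievable sum-rate in (A27) under cyclic placement."""
    assert K % N == 0, "cyclic placement here assumes Delta = K / N is an integer"
    delta = K // N
    if 1 <= Kc < delta:
        return Kc * Nr
    if delta <= Kc <= delta * Nr:
        return delta * Nr
    if delta * Nr < Kc <= K:
        return Kc
    return K  # Kc > K

# Hypothetical topology: N = 5 servers, K = 10 datasets (Delta = 2), recovery threshold Nr = 3.
for Kc in (1, 3, 8, 12):
    print(f"Kc = {Kc:2d}  ->  sum-rate bound = {prop1_sum_rate(5, 10, 3, Kc)}")
```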

Appendix B.3. Proof of Proposition 2

Similarly as in Theorem 1, we let $G_{X_i} = \bigcup_{j \in [K_c]} G_{X_i, j}$ denote the union characteristic graph that server $i \in \Omega$ builds for computing $\{f_j(W_\mathcal{K})\}_{j \in [K_c]}$. Note that given $W_{\mathcal{Z}_i} = \big( W_{1+(i-1)\Delta}, W_{2+(i-1)\Delta}, \ldots, W_{M+(i-1)\Delta} \big)$, the support set of server $i \in \Omega$ has a cardinality of $|\mathcal{X}_i| = 2^M$. Because the user demand is a collection of Boolean functions, in this scenario each server $i \in \Omega$ builds a graph with at most two independent sets, denoted by $s_0(G_{X_i})$ and $s_1(G_{X_i})$, yielding the function values $Z_i = 0$ and $Z_i = 1$, respectively.
Given the recovery threshold $N_r$, any subset $\mathcal{S}$ of servers with $|\mathcal{S}| = N_r$ stores the set $\mathcal{K}$, which is sufficient to compute the demanded functions. Given server $i \in \Omega$, consider the set of all $w_{\mathcal{Z}_i} \in \mathcal{W}_{\mathcal{Z}_i}$ which satisfies
$f\big(w_{\mathcal{Z}_i}, w_{\mathcal{Z}_\mathcal{S} \setminus \mathcal{Z}_i}\big) = 1, \quad \forall\, w_{\mathcal{Z}_\mathcal{S} \setminus \mathcal{Z}_i} \in \{0, 1\}^{|\mathcal{K} \setminus \mathcal{Z}_i|}, \quad \mathrm{(A28)}$
where the notation $w_{\mathcal{Z}_\mathcal{S} \setminus \mathcal{Z}_i}$ denotes the dataset values for the set of datasets stored in the subset of servers $\mathcal{S} \setminus i$. Note, in general, that $K_{n(\mathcal{S})} = |\mathcal{Z}_\mathcal{S}| = |\bigcup_{i \in \mathcal{S}} \mathcal{Z}_i|$. In the case of cyclic placement based on (1), out of the set of all datasets $\mathcal{K}$, there are $\Delta$ datasets that belong exclusively to server $i \in \Omega$. In this case, $|\mathcal{K} \setminus \mathcal{Z}_i| = K - \Delta$.
Note that (A28) captures the independent set $s_1(G_{X_i}) \ni w_{\mathcal{Z}_i}$. Equivalently, the set of dataset values $W_{\mathcal{Z}_i}$ that land in $s_1(G_{X_i})$ of $G_{X_i}$ yields $Z_i = 1$. The transmitted information takes the value $Z_i = 1$ with probability
$P(Z_i = 1) = P\big(W_{\mathcal{Z}_i} \in s_1(G_{X_i})\big), \quad i \in \Omega, \quad \mathrm{(A29)}$
using which the upper bound on the achievable sum rate can be determined.
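The construction in this proof can be sketched in a few lines of Python (an added illustration; the Boolean demand, the stored index set, and the parameters are hypothetical): it enumerates the local values that force the demanded function to 1 regardless of the datasets held elsewhere, i.e., the independent set $s_1(G_{X_i})$ of (A28), and then evaluates $P(Z_i = 1)$ in (A29) together with the resulting rate $h(P(Z_i = 1))$.

```python
from itertools import product
from math import log2, prod

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

# Hypothetical Boolean demand over K = 4 datasets: f(w) = w[0] OR w[1].
K = 4
f = lambda w: int(w[0] or w[1])
Z_i = (0, 1, 2)          # indices stored at server i, so M = 3
eps = 0.2                # P(W_k = 1), i.i.d. Bern(eps)
rest = [k for k in range(K) if k not in Z_i]

# s_1(G_{X_i}) per (A28): local values that force f = 1 for every completion of the rest.
s1 = []
for w_local in product((0, 1), repeat=len(Z_i)):
    assign = dict(zip(Z_i, w_local))
    forces_one = all(
        f([{**assign, **dict(zip(rest, w_rest))}[k] for k in range(K)]) == 1
        for w_rest in product((0, 1), repeat=len(rest))
    )
    if forces_one:
        s1.append(w_local)

# P(Z_i = 1) = P(W_{Z_i} in s_1) as in (A29); the server then transmits at rate h(P(Z_i = 1)).
p1 = sum(prod(eps if b else 1 - eps for b in w) for w in s1)
print(f"|s_1| = {len(s1)}, P(Z_i = 1) = {p1:.4f}, rate h(P(Z_i = 1)) = {h(p1):.4f} bits")
```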

Appendix B.4. Proof of Proposition 3

Recall that $W_k \sim \mathrm{Bern}(\epsilon)$ are i.i.d. across $k \in [K]$, and each server has a capacity $M = \Delta(N - N_r + 1)$. This means that, given the number of datasets $K$, each server can compute the product of $\Delta(N - N_r + 1)$ subfunctions and, hence, the minimum number of servers needed to evaluate the multi-linear function $f(W_\mathcal{K}) = \prod_{k \in [K]} W_k$ is $N^* = \big\lfloor \frac{N}{N - N_r + 1} \big\rfloor$, such that, given its capacity $M = |\mathcal{Z}_i|$, each server can compute the product of a disjoint set of $M$ subfunctions, i.e., $\prod_{k \in \mathcal{Z}_i} W_k$, which operates at a rate of $R_i \leq h(\epsilon^M)$, $i \in \Omega$. Exploiting the characteristic graph approach, we build $G_{X_1} = (V_{X_1}, E_{X_1})$ for $X_1$, with respect to the variables $X_{\Omega \setminus 1} = \{X_2, \ldots, X_N\}$ and $f(W_\mathcal{K})$, and similarly for the other servers, to characterize the sum-rate of the computation by evaluating the entropy of each graph.
To evaluate the first term in (12), we choose a total of $N^*$ servers with disjoint sets of subfunctions. We denote the selected set of servers by $\mathcal{N}^* \subseteq \Omega$, and the collective computation rate of these $N^*$ servers, as a function of the conditional graph entropies of these servers, becomes
$\sum_{i \in \mathcal{N}^*} R_i \overset{(a)}{\leq} H_{G_{X_{i_1}}}(X_{i_1}) + H_{G_{X_{i_2}}}(X_{i_2} \mid Z_{i_1}) + \cdots + H_{G_{X_{i_{N^*}}}}(X_{i_{N^*}} \mid Z_{i_1}, Z_{i_2}, \ldots, Z_{i_{N^*-1}}) \overset{(b)}{=} h(\epsilon^M) + \epsilon^M h(\epsilon^M) + (\epsilon^M)^2 h(\epsilon^M) + \cdots + (\epsilon^M)^{N^*-1} h(\epsilon^M) \overset{(c)}{=} \frac{1 - (\epsilon^M)^{N^*}}{1 - \epsilon^M} \cdot h(\epsilon^M), \quad \mathrm{(A30)}$
where $(a)$ follows from assuming $\mathcal{S} = \{i_1, i_2, \ldots, i_{N^*}\}$ with no loss of generality, and $(b)$ from the fact that the rate of server $i_l \in \mathcal{S}$ is positive only when $\prod_{i \in [i_{l-1}]} \prod_{k \in \mathcal{Z}_i} W_k = 1$, which is true with probability $(\epsilon^M)^{l-1}$. Finally, $(c)$ follows from the sum of the terms of the geometric series, i.e., $\sum_{l=0}^{N^*-1} (\epsilon^M)^l = \frac{1 - (\epsilon^M)^{N^*}}{1 - \epsilon^M}$. (While Proposition 3 uses conditional graph entropies, the statements of Theorem 1, Proposition 1, and Proposition 2 do not take into account the notion of conditional graph entropies. However, as indicated in Section 4.1 for computing linearly separable functions and in Section 4.2 for computing multi-linear functions, respectively, we used the conditional entropy-based sum-rate in (A30) to evaluate and illustrate the achievable gains over [39,60].)
In the case of $\Delta_N = N - N^* \cdot (N - N_r + 1) > 0$, the product of the $K$ subfunctions cannot be determined by the $N^*$ servers, and we need additional servers $\mathcal{I}^* \subseteq \Omega$ to aid the computation and determine the outcome of $f(W_\mathcal{K})$ by computing the product of the remaining $\xi_N$ subfunctions. In other words, if $\Delta_N > 0$ and $\prod_{i \in \mathcal{S}} \prod_{k \in \mathcal{Z}_i} W_k = 1$, the $(N^*+1)$-th server determines the outcome of $f(W_\mathcal{K})$ by computing the product of the subfunctions $W_k \sim \mathrm{Bern}(\epsilon)$, $k \in [N - \xi_N + 1 : N]$, that cannot be captured by the previous $N^*$ servers. Hence, the additional rate, given by the second term in (12), is given by the product of the term
$(\epsilon^M)^{N^*} = P\Big( \prod_{i \in \mathcal{S}} \prod_{k \in \mathcal{Z}_i} W_k = 1 \Big),$
with $\mathbb{1}_{\Delta_N > 0}$, and $h\big(\epsilon^{\xi_N}\big)$. Combining this rate term with (A30), we prove the statement of the proposition.
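For a numerical feel of the bound assembled in this proof, the following Python sketch is our illustration with hypothetical parameters; it takes $\xi_N$ to be the number of subfunctions left uncovered by the $N^*$ servers with disjoint sets, which is an assumption on our part. It evaluates the conditional-entropy sum-rate of (A30), adds the remainder term when $\Delta_N > 0$, and compares the result with the sum of the unconditional per-server rates.

```python
from math import log2

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def prop3_sum_rate(N, K, Nr, eps):
    """Achievable sum-rate upper bound for the multi-linear function, per (A30) plus the remainder term."""
    delta = K // N                       # cyclic shift Delta = K / N
    M = delta * (N - Nr + 1)             # per-server capacity
    n_star = N // (N - Nr + 1)           # servers holding disjoint sets of subfunctions
    rate = (1 - (eps ** M) ** n_star) / (1 - eps ** M) * h(eps ** M)
    delta_N = N - n_star * (N - Nr + 1)
    if delta_N > 0:                      # leftover subfunctions handled by one extra server
        xi_N = K - n_star * M            # assumption: subfunctions not yet covered
        rate += (eps ** M) ** n_star * h(eps ** xi_N)
    return rate

# Hypothetical topology: N = K = 6, Nr = 5, so M = 2 and N* = 3 (here Delta_N = 0).
N, K, Nr = 6, 6, 5
M = (K // N) * (N - Nr + 1)
for eps in (0.5, 0.25, 0.1, 0.01):
    print(f"eps = {eps:5.2f}: conditional graph-entropy sum-rate = {prop3_sum_rate(N, K, Nr, eps):.4f}, "
          f"sum of unconditional rates N* * h(eps^M) = {(N // (N - Nr + 1)) * h(eps ** M):.4f}")
```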

References

  1. Yang, C.; Wu, H.; Huang, Q.; Li, Z.; Li, J. Using spatial principles to optimize distributed computing for enabling the physical science discoveries. Proc. Natl. Acad. Sci. USA 2011, 108, 5498–5503. [Google Scholar] [CrossRef]
  2. Shamsi, J.; Khojaye, M.A.; Qasmi, M.A. Data-intensive cloud computing: Requirements, expectations, challenges, and solutions. J. Grid Comput. 2013, 11, 281–310. [Google Scholar] [CrossRef]
  3. Yang, H.; Ding, T.; Yuan, X. Federated Learning With Lossy Distributed Source Coding: Analysis and Optimization. IEEE Trans. Commun. 2023, 71, 4561–4576. [Google Scholar] [CrossRef]
  4. Gan, G. Evaluation of room air distribution systems using computational fluid dynamics. Energy Build. 1995, 23, 83–93. [Google Scholar] [CrossRef]
  5. Gao, Y.; Wang, L.; Zhou, J. Cost-efficient and quality of experience-aware provisioning of virtual machines for multiplayer cloud gaming in geographically distributed data centers. IEEE Access 2019, 7, 142574–142585. [Google Scholar] [CrossRef]
  6. Lushbough, C.; Brendel, V. An overview of the BioExtract Server: A distributed, Web-based system for genomic analysis. In Advances in Computational Biology; Springer: New York, NY, USA, 2010; pp. 361–369. [Google Scholar]
  7. Dean, J.; Ghemawat, S. MapReduce: Simplified data processing on large clusters. Commun. ACM 2008, 51, 107–113. [Google Scholar] [CrossRef]
  8. Grolinger, K.; Hayes, M.; Higashino, W.A.; L’Heureux, A.; Allison, D.S.; Capretz, M.A. Challenges for MapReduce in Big Data. In Proceedings of the IEEE World Congress Services, Anchorage, AK, USA, 27 June–2 July 2014; pp. 182–189. [Google Scholar]
  9. Al-Khasawneh, M.A.; Shamsuddin, S.M.; Hasan, S.; Bakar, A.A. MapReduce a Comprehensive Review. In Proceedings of the International Conference on Smart Computing and Electronic Enterprise (ICSCEE), Shah Alam, Malaysia, 11–12 July 2018; pp. 1–6. [Google Scholar]
  10. Zaharia, M.; Chowdhury, M.; Franklin, M.J.; Shenker, S.; Stoica, I. Spark: Cluster computing with working sets. In Proceedings of the USENIX Workshop on Hot Topics in Cloud Computing, Boston, MA, USA, 22 June 2010. [Google Scholar]
  11. Khumoyun, A.; Cui, Y.; Hanku, L. Spark based distributed deep learning framework for big data applications. In Proceedings of the International Conference on Information Science and Communications Technologies (ICISCT), Tashkent, Uzbekistan, 2–4 November 2016; pp. 1–5. [Google Scholar]
  12. Orgerie, A.C.; Assuncao, M.D.d.; Lefevre, L. A survey on techniques for improving the energy efficiency of large-scale distributed systems. ACM Comput. Surv. 2014, 46, 1–31. [Google Scholar] [CrossRef]
  13. Keralapura, R.; Cormode, G.; Ramamirtham, J. Communication-Efficient Distributed Monitoring of Thresholded Counts. In Proceedings of the ACM SIGMOD International Conference on Management of Data, New York, NY, USA, 27–29 June 2006; pp. 289–300. [Google Scholar]
  14. Li, W.; Chen, Z.; Wang, Z.; Jafar, S.A.; Jafarkhani, H. Flexible constructions for distributed matrix multiplication. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Melbourne, Australia, 12–20 July 2021; pp. 1576–1581. [Google Scholar]
  15. Liu, Y.; Yu, F.R.; Li, X.; Ji, H.; Leung, V.C. Distributed resource allocation and computation offloading in fog and cloud networks with non-orthogonal multiple access. IEEE Trans. Veh. Tech. 2018, 67, 12137–12151. [Google Scholar] [CrossRef]
  16. Noormohammadpour, M.; Raghavendra, C.S. Datacenter traffic control: Understanding techniques and tradeoffs. IEEE Commun. Surv. Tutor. 2017, 20, 1492–1525. [Google Scholar] [CrossRef]
  17. Shivaratri, N.; Krueger, P.; Singhal, M. Load distributing for locally distributed systems. Computer 1992, 25, 33–44. [Google Scholar] [CrossRef]
  18. Bestavros, A. Demand-based document dissemination to reduce traffic and balance load in distributed information systems. In Proceedings of the IEEE Symposium on Parallel and Distributed Processing, San Antonio, TX, USA, 25–28 October 1995; pp. 338–345. [Google Scholar]
  19. Reisizadeh, A.; Prakash, S.; Pedarsani, R.; Avestimehr, A.S. Tree Gradient Coding. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 2808–2812. [Google Scholar]
  20. Ozfatura, E.; Gündüz, D.; Ulukus, S. Gradient Coding with Clustering and Multi-Message Communication. In Proceedings of the IEEE Data Science Workshop, Minneapolis, MN, USA, 2–7 June 2019; pp. 42–46. [Google Scholar]
  21. Tandon, R.; Lei, Q.; Dimakis, A.G.; Karampatziakis, N. Gradient Coding: Avoiding Stragglers in Distributed Learning. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 31 July–3 August 2017. [Google Scholar]
  22. Ye, M.; Abbe, E. Communication-computation efficient gradient coding. In Proceedings of the International Conference on Machine Learning, PMLR, Stockholm, Sweden, 10–15 July 2018; pp. 5610–5619. [Google Scholar]
  23. Halbawi, W.; Azizan, N.; Salehi, F.; Hassibi, B. Improving Distributed Gradient Descent Using Reed-Solomon Codes. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 2027–2031. [Google Scholar]
  24. Maddah-Ali, M.A.; Niesen, U. Fundamental limits of caching. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Istanbul, Türkiye, 7–12 July 2013; pp. 1077–1081. [Google Scholar]
  25. Karamchandani, N.; Niesen, U.; Maddah-Ali, M.A.; Diggavi, S.N. Hierarchical coded caching. IEEE Trans. Info Theory 2016, 62, 3212–3229. [Google Scholar] [CrossRef]
  26. Li, S.; Supittayapornpong, S.; Maddah-Ali, M.A.; Avestimehr, S. Coded TeraSort. In Proceedings of the IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Lake Buena Vista, FL, USA, 29 May–2 June 2017. [Google Scholar]
  27. Li, S.; Maddah-Ali, M.A.; Yu, Q.; Avestimehr, A.S. A fundamental tradeoff between computation and communication in distributed computing. IEEE Trans. Inf. Theory 2017, 64, 109–128. [Google Scholar] [CrossRef]
  28. Yu, Q.; Maddah-Ali, M.A.; Avestimehr, A.S. The exact rate-memory tradeoff for caching with uncoded prefetching. IEEE Trans. Inf. Theory 2018, 64, 1281–1296. [Google Scholar] [CrossRef]
  29. Naderializadeh, N.; Maddah-Ali, M.A.; Avestimehr, A.S. Fundamental limits of cache-aided interference management. IEEE Trans. Inf. Theory 2017, 63, 3092–3107. [Google Scholar] [CrossRef]
  30. Subramaniam, A.M.; Heidarzadeh, A.; Narayanan, K.R. Collaborative decoding of polynomial codes for distributed computation. In Proceedings of the IEEE Information Theory Workshop (ITW), Visby, Sweden, 25–28 August 2019; pp. 1–5. [Google Scholar]
  31. Dutta, S.; Fahim, M.; Haddadpour, F.; Jeong, H.; Cadambe, V.; Grover, P. On the optimal recovery threshold of coded matrix multiplication. IEEE Trans. Inf. Theory 2019, 66, 278–301. [Google Scholar] [CrossRef]
  32. Yosibash, R.; Zamir, R. Frame codes for distributed coded computation. In Proceedings of the International Symposium on Topics in Coding, Montreal, QC, Canada, 18–21 August 2021; pp. 1–5. [Google Scholar]
  33. Dimakis, A.G.; Godfrey, P.B.; Wu, Y.; Wainwright, M.J.; Ramchandran, K. Network coding for distributed storage systems. IEEE Trans. Inf. Theory 2010, 56, 4539–4551. [Google Scholar] [CrossRef]
  34. Wan, K.; Sun, H.; Ji, M.; Tuninetti, D.; Caire, G. Cache-aided matrix multiplication retrieval. IEEE Trans. Inf. Theory 2022, 68, 4301–4319. [Google Scholar] [CrossRef]
  35. Jia, Z.; Jafar, S.A. On the capacity of secure distributed batch matrix multiplication. IEEE Trans. Inf. Theory 2021, 67, 7420–7437. [Google Scholar] [CrossRef]
  36. Soleymani, M.; Mahdavifar, H.; Avestimehr, A.S. Analog lagrange coded computing. IEEE J. Sel. Areas Inf. Theory 2021, 2, 283–295. [Google Scholar] [CrossRef]
  37. Yu, Q.; Maddah-Ali, M.A.; Avestimehr, S. Polynomial codes: An optimal design for high-dimensional coded matrix multiplication. In Proceedings of the International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4403–4413. [Google Scholar]
  38. López, H.H.; Matthews, G.L.; Valvo, D. Secure MatDot codes: A secure, distributed matrix multiplication scheme. In Proceedings of the IEEE Information Theory Workshop (ITW), Mumbai, India, 6–9 November 2022; pp. 149–154. [Google Scholar]
  39. Wan, K.; Sun, H.; Ji, M.; Caire, G. Distributed linearly separable computation. IEEE Trans. Inf. Theory 2021, 68, 1259–1278. [Google Scholar] [CrossRef]
  40. Zhu, J.; Li, S.; Li, J. Information-theoretically private matrix multiplication from MDS-coded storage. IEEE Trans. Inf. Forensics Secur. 2023, 18, 1680–1695. [Google Scholar] [CrossRef]
  41. Das, A.B.; Ramamoorthy, A.; Vaswani, N. Efficient and Robust Distributed Matrix Computations via Convolutional Coding. IEEE Trans. Inf. Theory. 2021, 67, 6266–6282. [Google Scholar] [CrossRef]
  42. Yu, Q.; Maddah-Ali, M.A.; Avestimehr, A.S. Straggler Mitigation in Distributed Matrix Multiplication: Fundamental Limits and Optimal Coding. IEEE Trans. Inf. Theory. 2020, 66, 1920–1933. [Google Scholar] [CrossRef]
  43. Fawzi, A.; Balog, M.; Huang, A.; Hubert, T.; Romera-Paredes, B.; Barekatain, M.; Novikov, A.; R Ruiz, F.J.; Schrittwieser, J.; Swirszcz, G.; et al. Discovering faster matrix multiplication algorithms with reinforcement learning. Nature 2022, 610, 47–53. [Google Scholar] [CrossRef] [PubMed]
  44. Aliasgari, M.; Simeone, O.; Kliewer, J. Private and secure distributed matrix multiplication with flexible communication load. IEEE Trans. Inf. Forensics Secur. 2020, 15, 2722–2734. [Google Scholar] [CrossRef]
  45. D’Oliveira, R.G.; El Rouayheb, S.; Heinlein, D.; Karpuk, D. Notes on communication and computation in secure distributed matrix multiplication. In Proceedings of the IEEE Conference on Communications and Network Security, Virtual, 29 June–1 July 2020; pp. 1–6. [Google Scholar]
  46. Rashmi, K.V.; Shah, N.B.; Kumar, P.V. Optimal exact-regenerating codes for distributed storage at the MSR and MBR points via a product-matrix construction. IEEE Trans. Inf. Theory 2011, 57, 5227–5239. [Google Scholar] [CrossRef]
  47. Cancès, E.; Friesecke, G. Density Functional Theory: Modeling, Mathematical Analysis, Computational Methods, and Applications, 1st ed.; Springer Nature: Berlin/Heidelberg, Germany, 2023. [Google Scholar]
  48. Hanna, O.A.; Ezzeldin, Y.H.; Sadjadpour, T.; Fragouli, C.; Diggavi, S. On distributed quantization for classification. IEEE J. Sel. Areas Inf. Theory 2020, 1, 237–249. [Google Scholar] [CrossRef]
  49. Luo, P.; Xiong, H.; Lü, K.; Shi, Z. Distributed classification in peer-to-peer networks. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Jose, CA, USA, 12–15 August 2007; pp. 968–976. [Google Scholar]
  50. Karakus, C.; Sun, Y.; Diggavi, S.; Yin, W. Straggler mitigation in distributed optimization through data encoding. In Proceedings of the International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5434–5442. [Google Scholar]
  51. Jia, Z.; Jafar, S.A. Cross subspace alignment codes for coded distributed batch computation. IEEE Trans. Inf. Theory 2021, 67, 2821–2846. [Google Scholar] [CrossRef]
  52. Wang, J.; Jia, Z.; Jafar, S.A. Price of Precision in Coded Distributed Matrix Multiplication: A Dimensional Analysis. In Proceedings of the IEEE Information Theory Workshop (ITW), Kanazawa, Japan, 17–21 October 2021; pp. 1–6. [Google Scholar]
  53. Chang, W.T.; Tandon, R. On the capacity of secure distributed matrix multiplication. In Proceedings of the IEEE Global Communications Conference, Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1–6. [Google Scholar]
  54. Monagan, M.; Pearce, R. Parallel sparse polynomial multiplication using heaps. In Proceedings of the International Symposium on Symbolic and Algebraic Computation, Seoul, Republic of Korea, 28–31 July 2009; pp. 263–270. [Google Scholar]
  55. Hsu, C.D.; Jeong, H.; Pappas, G.J.; Chaudhari, P. Scalable reinforcement learning policies for multi-agent control. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Prague, Czech Republic, 27 September 27–1 October 2021; pp. 4785–4791. [Google Scholar]
  56. Goldenbaum, M.; Boche, H.; Stańczak, S. Nomographic functions: Efficient computation in clustered Gaussian sensor networks. IEEE Trans. Wirel. Commun. 2014, 14, 2093–2105. [Google Scholar] [CrossRef]
  57. Goldenbaum, M.; Boche, H.; Stańczak, S. Harnessing interference for analog function computation in wireless sensor networks. IEEE Trans. Signal Process. 2013, 61, 4893–4906. [Google Scholar] [CrossRef]
  58. Huang, W.; Wan, K.; Sun, H.; Ji, M.; Qiu, R.C.; Caire, G. Fundamental Limits of Distributed Linearly Separable Computation under Cyclic Assignment. In Proceedings of the IEEE International Symposium on Information Theory (ISIT’23), Taipei, Taiwan, 25–30 June 2023. [Google Scholar]
  59. Wan, K.; Sun, H.; Ji, M.; Caire, G. On Secure Distributed Linearly Separable Computation. IEEE J. Sel. Areas Commun. 2022, 40, 912–926. [Google Scholar] [CrossRef]
  60. Slepian, D.; Wolf, J.K. Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19, 471–480. [Google Scholar] [CrossRef]
  61. Cover, T. A proof of the data compression theorem of Slepian and Wolf for ergodic sources. IEEE Trans. Inf. Theory 1975, 21, 226–228. [Google Scholar] [CrossRef]
  62. Korner, J.; Marton, K. How to encode the modulo-two sum of binary sources. IEEE Trans. Inf. Theory 1979, 25, 219–221. [Google Scholar] [CrossRef]
  63. Lalitha, V.; Prakash, N.; Vinodh, K.; Kumar, P.V.; Pradhan, S.S. Linear coding schemes for the distributed computation of subspaces. IEEE J. Sel. Areas Commun. 2013, 31, 678–690. [Google Scholar] [CrossRef]
  64. Yamamoto, H. Wyner-Ziv theory for a general function of the correlated sources. IEEE Trans. Inf. Theory 1982, 28, 803–807. [Google Scholar] [CrossRef]
  65. Wyner, A.; Ziv, J. The rate-distortion function for source coding with side information at the decoder. IEEE Trans. Inf. Theoy 1976, 22, 1–10. [Google Scholar] [CrossRef]
  66. Wan, K.; Sun, H.; Ji, M.; Tuninetti, D.; Caire, G. Cache-Aided General Linear Function Retrieval. Entropy 2020, 23, 25. [Google Scholar] [CrossRef]
  67. Khalesi, A.; Elia, P. Multi-User Linearly-Separable Distributed Computing. IEEE. Trans. Inf. Theory 2023, 69, 6314–6339. [Google Scholar] [CrossRef]
  68. Wan, K.; Sun, H.; Ji, M.; Caire, G. On the Tradeoff Between Computation and Communication Costs for Distributed Linearly Separable Computation. IEEE Trans. Commun. 2021, 69, 7390–7405. [Google Scholar] [CrossRef]
  69. Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine learning for medical imaging. Radiographics 2017, 37, 505–515. [Google Scholar] [CrossRef] [PubMed]
  70. Correa, N.M.; Adali, T.; Li, Y.O.; Calhoun, V.D. Canonical Correlation Analysis for Data Fusion and Group Inferences. IEEE Signal Process. Mag. 2010, 27, 39–50. [Google Scholar] [CrossRef] [PubMed]
  71. Kant, G.; Sangwan, K.S. Predictive modeling for power consumption in machining using artificial intelligence techniques. Procedia CIRP 2015, 26, 403–407. [Google Scholar] [CrossRef]
  72. Han, T.; Kobayashi, K. A dichotomy of functions F(X, Y) of correlated sources (X, Y). IEEE Trans. Inf. Theory 1987, 33, 69–76. [Google Scholar] [CrossRef]
  73. Alon, N.; Orlitsky, A. Source coding and graph entropies. IEEE Trans. Inf. Theory 1996, 42, 1329–1339. [Google Scholar] [CrossRef]
  74. Orlitsky, A.; Roche, J.R. Coding for computing. IEEE Trans. Inf. Theory 2001, 47, 903–917. [Google Scholar] [CrossRef]
  75. Körner, J. Coding of an information source having ambiguous alphabet and the entropy of graphs. In Proceedings of the Prague Conference on Information Theory, Prague, Czech Republic, 19–25 September 1973. [Google Scholar]
  76. Malak, D. Fractional Graph Coloring for Functional Compression with Side Information. In Proceedings of the IEEE Information Theory Workshop (ITW), Mumbai, India, 6–9 November 2022. [Google Scholar]
  77. Malak, D. Weighted graph coloring for quantized computing. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Taipei, Taiwan, 25–30 June 2023; pp. 2290–2295. [Google Scholar]
  78. Charpenay, N.; Le Treust, M.; Roumy, A. Complementary Graph Entropy, AND Product, and Disjoint Union of Graphs. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Taipei, Taiwan, 25–30 June 2023; pp. 2446–2451. [Google Scholar]
  79. Deylam Salehi, M.R.; Malak, D. An Achievable Low Complexity Encoding Scheme for Coloring Cyclic Graphs. In Proceedings of the Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 26–29 September 2023; pp. 1–8. [Google Scholar]
  80. Maugey, T.; Rizkallah, M.; Mahmoudian Bidgoli, N.; Roumy, A.; Guillemot, C. Graph Spectral 3D Image Compression. In Graph Spectral Image Processing; Wiley: Hoboken, NJ, USA, 2021; pp. 105–128. [Google Scholar]
  81. Sevilla, J.L.; Segura, V.; Podhorski, A.; Guruceaga, E.; Mato, J.M.; Martinez-Cruz, L.A.; Corrales, F.J.; Rubio, A. Correlation between gene expression and GO semantic similarity. IEEE/ACM Trans. Comput. Biol. Bioinf. 2005, 2, 330–338. [Google Scholar] [CrossRef] [PubMed]
  82. Feizi, S.; Médard, M. On network functional compression. IEEE Trans. Inf. Theory 2014, 60, 5387–5401. [Google Scholar] [CrossRef]
  83. Maddah-Ali, M.A.; Niesen, U. Fundamental limits of caching. IEEE Trans. Inf. Theory 2014, 60, 2856–2867. [Google Scholar] [CrossRef]
  84. Mosk-Aoyama, D.; Shah, D. Fast Distributed Algorithms for Computing Separable Functions. IEEE. Trans. Info. Theory 2008, 54, 2997–3007. [Google Scholar] [CrossRef]
  85. Kaur, G. A comparison of two hybrid ensemble techniques for network anomaly detection in spark distributed environment. J. Inform. Secur. Appl. 2020, 55, 102601. [Google Scholar] [CrossRef]
  86. Chen, J.; Li, J.; Huang, R.; Yue, K.; Chen, Z.; Li, W. Federated learning for bearing fault diagnosis with dynamic weighted averaging. In Proceedings of the International Conference on Sensing, Measurement & Data Analytics in the era of Artificial Intelligence, Nanjing, China, 21–23 October 2021; pp. 1–6. [Google Scholar]
  87. Zhao, J.; Govindan, R.; Estrin, D. Computing aggregates for monitoring wireless sensor networks. In Proceedings of the IEEE International Workshop on Sensor Network Protocols and Applications, Anchorage, AK, USA, 1 January 2003; pp. 139–148. [Google Scholar]
  88. Giselsson, P.; Rantzer, A. Large-Scale and Distributed Optimization: An Introduction, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2018; Volume 2227. [Google Scholar]
  89. Kavadias, S.; Chao, R.O. Resource Allocation and New Product Development Portfolio Management, 1st ed.; Elsevier: Amsterdam, The Netherlands; Butterworth-Heinemann: Oxford, UK, 2007; pp. 135–163. [Google Scholar]
  90. Diniz, C.A.R.; Tutia, M.H.; Leite, J.G. Bayesian analysis of a correlated binomial model. Braz. J. Probab. Stat. 2010, 24, 68–77. [Google Scholar] [CrossRef]
  91. Boland, P.J.; Proschan, F.; Tong, Y. Some majorization inequalities for functions of exchangeable random variables. Lect. Not.-Mono. Ser. 1990, 85–91. [Google Scholar]
  92. Witsenhausen, H. The zero-error side information problem and chromatic numbers (corresp.). IEEE Trans. Inf. Theory 1976, 22, 592–593. [Google Scholar] [CrossRef]
  93. Moon, J.W.; Moser, L. On cliques in graphs. Israel J. Math. 1965, 3, 23–28. [Google Scholar] [CrossRef]
  94. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1991. [Google Scholar]
Figure 1. The gain $\eta_{lin}$ of the characteristic graph approach for $K_c = 1$ in Section 4.1 (Scenario I). (Left) $\rho = 0$ for various distributed topologies. (Right) The correlation model given as (17) for $\mathcal{T}(30, 30, 1, 11, 20)$ with different $\epsilon$ values.
Figure 2. Colorings of graphs in Section 4.1 (Scenario II). (Top Left–Right) Characteristic graphs $G_{X_1}$ and $G_{X_2}$, respectively. (Bottom Left–Right) The minimum conditional entropy colorings of $G_{X_1}$ given $c_{G_{X_2}}$ and of $G_{X_2}$ given $c_{G_{X_1}}$, respectively.
Figure 3. $\eta_{lin}$ in (19) versus $\epsilon$, for distributed computing of $f_1 = W_2$ and $f_2 = W_2 + W_3$, where $K_c = 2$, $N_r = 2$, with $\rho = 0$, in Section 4.1 (Scenario II).
Figure 4. $\eta_{lin}$ versus $\epsilon$, for distributed computing of $f_1 = W_2$ and $f_2 = W_2 + W_3$, where $K_c = 2$, $N_r = 2$, in Section 4.1, using different joint PMF models for $P_{W_2, W_3}$ (Scenario II). (Left) $\eta_{lin}$ in (20) for the joint PMF in Table 2 for different values of $p$. (Right) $\eta_{lin}$ for the joint PMF in (17) for different values of $\rho$.
Figure 5. $\eta_{lin}$ on a logarithmic scale versus $\epsilon$ for $K_c$ demanded functions, for various values of $K_c$, with $\rho = 0$, for different topologies, as detailed in Section 4.1 (Scenario III).
Figure 6. Gain $10\log_{10}(\eta_{SW})$ versus $\epsilon$ for computing (11), where $K_c = 1$, $\rho = 0$, $N_r = N - 1$. (Left) The parameters $N$, $K$, and $M$ are indicated for each configuration. (Right) $10\log_{10}(\eta_{SW})$ versus $\epsilon$ to observe the effect of $N$ for a fixed total cache size $MN$ and fixed $K$.
Table 1. Notation.
Distributed-Computation-System-Related Definitions | Symbols
Number of distributed servers; set of distributed servers; capacity of a server | $N$; $\Omega$; $M$
Set of datasets; dataset catalog size | $\{D_k\}_{k \in [K]}$; $K = |\mathcal{K}|$
Subfunction $k \in \mathcal{Z}_i \subseteq [K]$ | $W_k = h_k(D_k)$
The number of symbols in each $W_k$; blocklength | $L$; $n$
Set of indices of datasets assigned to server $i \in \Omega$ such that $|\mathcal{Z}_i| \leq M$ | $\mathcal{Z}_i \subseteq [K]$
Set of subfunctions corresponding to a subset of servers with indices $i \in \mathcal{S}$ for $\mathcal{S} \subseteq \Omega$ | $X_\mathcal{S} = \{X_i : i \in \mathcal{S}\}$
Recovery threshold | $N_r$
Number of functions demanded by the user | $K_c$
Number of symbols per transmission of server $i \in \Omega$ | $T_i$
Topology of the multi-server multi-function distributed computing setting | $\mathcal{T}(N, K, K_c, M, N_r)$
Graph-Theoretic Definitions | Symbols
Characteristic graph that server $i$ builds for computing $F(X_\Omega)$ | $G_{X_i}$, $i \in \Omega$
Union of characteristic graphs that server $i$ builds for computing $\{F_j(X_\Omega)\}_{j \in [K_c]}$ | $G_{X_i}$, $i \in \Omega$
Maximal independent set (MIS); set of all MISs of $i \in \Omega$ | $U_1$; $S(G_{X_1})$
A valid coloring of $G_{X_i}$ | $c_{G_{X_i}}$
$n$-th OR power graph; a valid coloring of the $n$-th OR power graph | $G_{X_i}^n$; $c_{G_{X_i}^n}(X_i)$
Characteristic graph entropy of $X_i$ | $H_{G_{X_i}}(X_i)$
Conditional characteristic graph entropy of $X_i$ such that $i \in \mathcal{S}$, given $X_{\mathcal{S}^c}$ | $H_{G_{X_i}}(X_i \mid X_{\mathcal{S}^c})$
Table 2. Joint PMF P W 2 , W 3 of W 2 and W 3 with a crossover parameter p, in Section 4.1 (Scenario II).
$P_{W_2, W_3}(W_2, W_3)$ | $W_2 = 0$ | $W_2 = 1$
$W_3 = 0$ | $(1-\epsilon)(1-p)$ | $\epsilon p$
$W_3 = 1$ | $(1-\epsilon)p$ | $\epsilon(1-p)$
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
