Abstract
The work here studies the communication cost for a multi-server multi-task distributed computation framework, as well as for a broad class of functions and data statistics. Considering the framework where a user seeks the computation of multiple complex (conceivably non-linear) tasks from a set of distributed servers, we establish the communication cost upper bounds for a variety of data statistics, function classes, and data placements across the servers. To do so, we proceed to apply, for the first time here, Körner’s characteristic graph approach—which is known to capture the structural properties of data and functions—to the promising framework of multi-server multi-task distributed computing. Going beyond the general expressions, and in order to offer clearer insight, we also consider the well-known scenario of cyclic dataset placement and linearly separable functions over the binary field, in which case, our approach exhibits considerable gains over the state of the art. Similar gains are identified for the case of multi-linear functions.
1. Introduction
Distributed computing plays an increasingly significant role in accelerating the execution of computationally challenging and complex tasks. This growth in influence is rooted in the innate capability of distributed computing to parallelize computational loads across multiple servers. This same parallelization renders distributed computing an indispensable tool for addressing a wide array of complex computational challenges, spanning scientific simulations and the extraction of various spatial data distributions [1], data-intensive analyses for cloud computing [2], and machine learning [3], as well as applications in various other fields such as computational fluid dynamics [4], high-quality graphics for movie and game rendering [5], and a variety of medical applications [6], to name just a few. At the center of this ever-increasing presence of parallelized computing stand modern parallel processing techniques, such as MapReduce [7,8,9] and Spark [10,11].
However, for distributed computing to achieve the desirable parallelization effect, there is an undeniable need for massive information exchange to and from the various network nodes. Reducing this communication load is essential for scalability [12,13,14,15] in various topologies [16,17,18]. Central to the effort to reduce communication costs stand coding techniques such as those found in [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36], including gradient coding [21] and different variants of coded distributed computing that nicely yield gains in reliability, scalability, computation speed, and cost-effectiveness [24]. Similar communication-load aspects are often addressed via polynomial codes [37], which can mitigate stragglers and enhance the recovery threshold, while MatDot codes, devised in [31,38] for secure distributed matrix multiplication, can decrease the number of transmissions for distributed matrix multiplication. This same emphasis on reducing communication costs is even more prominent in works like [31,34,35,38,39,40,41,42,43,44,45,46], which again, focus on distributed matrix multiplication. For example, focusing on a cyclic dataset placement model, the work in [39] provided useful achievability results, while the authors of [35] have characterized achievability and converse bounds for secure distributed matrix multiplication. Furthermore, the work in [34] found creative methods to exploit the correlation between the entries of the matrix product in order to reduce the cost of communication.
1.1. The Multi-Server Multi-Function Distributed Computing Setting and the Need for Accounting for General Non-Linear Functions
As computing requirements become increasingly challenging, distributed computing models have also evolved to be increasingly complex. One such recent model is the multi-server multi-function distributed computing model that consists of a master node, a set of distributed servers, and a user demanding the computation of multiple functions. The master contains the set of all datasets and allocates them to the servers, which are then responsible for computing a set of specific subfunctions for the datasets. This multi-server multi-function setting was recently studied by Wan et al. in [39] for the class of linearly separable functions, which nicely captures a wide range of real-world tasks [7] such as convolution [41], the discrete Fourier transform [47], and a variety of other cases as well. This same work bounded the communication cost, employing linear encoding and linear decoding that leverage the structure of requests.
At the same time, however, there is a growing need to consider more general classes of functions, including non-linear functions, such as is often the case with subfunctions that produce intermediate values in MapReduce operations [7] or that relate to quantization [48], classification [49], and optimization [50]. Intense interest can also be identified in the aforementioned problem of distributed matrix multiplication, which has been explored in a plethora of works, including [35,42,45,51,52,53], with a diverse focus that entails secrecy [45,51,53], as well as precision and stragglers [14,35,42,52], to name a few. In addition to matrix multiplication, other important non-linear function classes include sparse polynomial multiplication [54], permutation invariant functions [55]—which often appear in multi-agent settings and have applications in learning, combinatorics, and graph neural networks—as well as nomographic functions [56,57], which can appear in the context of sensor networks and which have strong connections with interference exploitation and lattice codes, as nicely revealed in [56,57].
Our own work here is indeed motivated by this emerging need for distributed computing of non-linear functions, and our goal is to now consider general functions in the context of the multi-server multi-function distributed computing framework while also capturing dataset statistics and correlations and while exploiting the structural properties of the (possibly non-linear) functions requested by the user. For this purpose, we go beyond the linear coding approaches in [39,58,59] and devise demand-based encoding–decoding solutions. Furthermore, we adopt—in the context of the multi-server multi-function framework—the powerful tools from characteristic graphs that are specifically geared toward capturing both the statistical structure of the data as well as the properties of functions beyond the linear case. To help the reader better understand our motivation and contribution, we proceed with a brief discussion on data structure and characteristic graphs.
1.2. Data Correlation and Structure
Crucial in reducing the communication bottleneck of distributed computing is an ability to capture the structure that appears in modern datasets. Indeed, even before computing considerations come into play, capturing the general structure of the data has been crucial in reducing the communication load in various scenarios such as those in the seminal work by Slepian–Wolf [60] and Cover [61]. Similarly, when function computation is introduced, data structure can be a key component. In the context of computing, we have seen the seminal work by Körner and Marton [62], which focused on efficient compression of the modulo 2 sum of two statistically dependent sources, while Lalitha et al. [63] explored linear combinations of multiple statistically dependent sources. Furthermore, for general bivariate functions of correlated sources, when one of the sources is available as side information, the work of Yamamoto [64] generalized the pioneering work of Wyner and Ziv [65] to provide a rate-distortion characterization for the function computation setting.
It is the case, however, that when the computational model becomes more involved—as is the case in our multi-server multi-function scenario here—the data may often be treated as unstructured and independent [39,58,66,67,68]. This naturally allows for crucial analytical tractability, but it may often ignore the potential benefits of accounting for statistical skews and correlations in data when aiming to reduce communication costs in distributed computing. Furthermore, this comes at a time when more and more function computation settings—such as in medical imaging analysis [69], data fusion, and group inferences [70], as well as predictive modeling for artificial intelligence [71]—entail datasets with prominent dependencies and correlations. While various works, such as those by Körner–Marton [62], Han–Kobayashi [72], Yamamoto [64], Alon–Orlitsky [73], and Orlitsky–Roche [74], provide crucial breakthroughs in exploiting data structure, to the best of our knowledge, in the context of fully distributed function computation, the structure in functions and data has yet to be considered simultaneously.
1.3. Characteristic Graphs
To jointly account for this structure in both data and functions, we draw from the powerful literature on characteristic graphs, introduced by Körner for source coding [75] and used in data compression [62,73,74,76,77,78], cryptography [79], image processing [80], and bioinformatics [81]. For example, toward understanding the fundamental limits of distributed functional compression, the work in [75] devised the graph entropy approach in order to provide the best possible encoding rate of an information source with vanishing error probability. This same approach, while capturing both function structure and source structure, was presented for the case of one source, and it is not directly applicable to the distributed computing setting. Similarly, the zero-error side information setting in [73] and the lossy encoding setting in [64,74] use Körner’s graph entropy [75] approach to capture both function structure and source structure but were again presented for the case of one source. A similar focus can be found in the works in [73,74,76,77,79]. The same characteristic graph approach was nicely used by Feizi and Médard in [82] for a simple distributed computing framework, albeit in the absence of considerations for the data structure.
Characteristic graphs, which are used in fully distributed architectures to compress information, can allow us to capture various data statistics and correlations, various data placement arrangements, and various function types. This versatility motivates us to employ characteristic graphs in our multi-server multi-function architecture for distributed computing of non-linear functions.
1.4. Contributions
In this paper, leveraging fundamental principles from source and functional compression, as well as graph theory, we study a general multi-server multi-function distributed computing framework composed of a single user requesting a set of functions, which are computed with the assistance of distributed servers that have partial access to the datasets. To achieve our goal, we consider the use of Körner’s characteristic graph framework [75] in our multi-server multi-function setting and proceed to establish upper bounds on the achievable sum-rates reflecting the setting’s communication requirements.
By extending, for the first time here, Körner’s characteristic graph framework [75] to the new multi-server multi-function setting, we are able to reflect the nature of the functions and data statistics in order to allow each server to build a codebook of encoding functions that determine the transmitted information. Each server, using its own codebook, can transmit a function (or a set of functions) of the subfunctions of the data available in its storage and to then provide the user with sufficient information for evaluating the demanded functions. The codebooks allow for a substantial reduction in the communication load.
The employed approach allows us to account for general dataset statistics, correlations, dataset placement, and function classes, thus yielding gains over the state of the art [39,60], as showcased in our examples for the case of linearly separable functions in the presence of statistically skewed data, as well as for the case of multi-linear functions where the gains are particularly prominent, again under statistically skewed data. For this last case of multi-linear functions, we provide an upper bound on the achievable sum-rate (see Section 4.2) under a cyclic placement of the data that reside in the binary field. We also provide a generalization of some elements in the existing works on linearly separable functions [39,58].
In the end, our work demonstrates the power of using characteristic-graph-based encoding for exploiting the structural properties of functions and data in distributed computing, and it provides insights into fundamental compression limits, all for the broad scenario of multi-server multi-function distributed computation.
1.5. Paper Organization
The rest of this paper is structured as follows. Section 2 describes the system model for the multi-server multi-function architecture. Section 3 details the main results on the communication cost or sum-rate bounds under general dataset distributions and correlations, dataset placement models, and general function classes requested by the user over a field of characteristic at least two, through employing the characteristic graph approach, and contrasts the sum-rate with the relevant prior works, e.g., [39,60]. Section 4 provides numerical evaluations that demonstrate the achievable gains. Finally, we summarize our key results and outline possible future directions in Section 5. We provide a primer for the key definitions and results on characteristic graphs and their fundamental compression limits in Appendix A and give proofs of our main results in Appendix B.
Notation: We denote by the Shannon entropy of random variable X drawn from distribution or probability mass function (PMF) . Let be the joint PMF of two random variables and , where and are not necessarily independent and identically distributed (i.i.d.), i.e., equivalently, the joint PMF is not in product form. The notation denotes that X is Bernoulli distributed with parameter . Let denote the binary entropy function and denote the entropy of a binomial random variable of size , with modeling the success probability of each Boolean-valued outcome. The notation denotes a subset of servers with indices for . The notation denotes the complement of . We denote the probability of an event A by . The notation denotes the indicator function, which takes the value 1 if and 0 otherwise. The notation denotes the characteristic graph that server builds for computing . The measures and denote the entropy of characteristic graph and the conditional graph entropy for random variable X given Y, respectively. The notation shows the topology of the distributed system. We note that denotes the indices of datasets stored in , and the notation represents the cardinality of the datasets in the union of the sets in for a given subset of servers. We also note that , , and for such that . We use the convention if a divides b. We provide the notation in Table 1.
Table 1.
Notation.
2. System Model
This section outlines our multi-server multi-function architecture and details our main technical contributions, namely, the communication cost for the problem of distributed computing of general non-linear functions and the cost for special instances of the computation problem under some simplifying assumptions on the dataset statistics, dataset correlations, placement, and the structures of functions.
In the multi-server multi-function distributed computation framework, the master has access to the set of all datasets and distributes the datasets across the servers. The total number of servers is N, and each server has a capacity of M. Communication from the master to the servers is allowed, whereas the servers are distributed and cannot collaborate. The user requests functions that could be non-linear. Given the dataset assignment to the servers, any subset of servers is sufficient to compute the functions requested. We denote by the topology for the described multi-server multi-function distributed computing setting, which we detail in the following.
2.1. Datasets, Subfunctions, and Placement into Distributed Servers
There are K datasets in total, each denoted by , . Each distributed server with a capacity of M is assigned a subset of datasets with indices such that , where the assignments possibly overlap.
Each server computes a set of subfunctions for , . Datasets could be dependent (We note that by exploiting the temporal and spatial variation or dependence of data, it is possible to decrease the communication cost.) across , so could . We denote the number of symbols in each by L, which equals the blocklength n. Let denote the set of subfunctions of the i-th server, be the alphabet of , and be the set of subfunctions of all servers. We denote with and , the length n sequences of subfunction , and of assigned to server .
2.2. Cyclic Dataset Placement Model, Computation Capacity, and Recovery Threshold
We assume that the total number of datasets K is divisible by the number of servers N, i.e., . The dataset placement on the N distributed servers is conducted in a circular or cyclic manner, with a fixed number of circular shifts between two consecutive servers, where the shifts are to the right and the final entries are moved to the first positions, if necessary. As a result of the cyclic placement, any subset of servers covers the set of all datasets needed to compute the functions requested by the user. Given , each server has a storage size or computation cost of , and the amount of dataset overlap between consecutive servers is .
Hence, the set of indices assigned to server is given as follows:
where , . As a result of (1), the cardinality of the datasets assigned to each server meets the storage capacity constraint M with equality, i.e., , for all .
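To make the placement rule above concrete, the following minimal Python sketch (our own illustration; the function name and the parameters N, K, M are hypothetical and assume N divides K) lists the dataset indices that each server would store under a right-circular shift of K/N positions between consecutive servers.

```python
def cyclic_placement(N, K, M):
    """Cyclically assign K datasets to N servers; server k stores M
    consecutive dataset indices (modulo K), starting at k * (K // N)."""
    assert K % N == 0, "K must be divisible by N"
    shift = K // N  # circular shift between two consecutive servers
    return {k: [(k * shift + j) % K for j in range(M)] for k in range(N)}

# Example: N = 4 servers, K = 8 datasets, storage capacity M = 4,
# so consecutive servers overlap in M - K/N = 2 datasets.
print(cyclic_placement(4, 8, 4))
# {0: [0, 1, 2, 3], 1: [2, 3, 4, 5], 2: [4, 5, 6, 7], 3: [6, 7, 0, 1]}
```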
2.3. User Demands and Structure of the Computation
We address the problem of distributed lossless compression of a set of general multi-variable functions , , requested by the user from the set of servers, where , and the functions are known by the servers and the user. More specifically, from a subset of distributed servers, the user aims to compute, in a lossless manner, the following length n sequence as n tends to infinity:
where is the function outcome for the l-th realization , given the length n sequence. We note that the representation in (2) is the most general form of a (conceivably non-linear) multi-variate function, which encompasses the special cases of separable functions and linearly separable functions, which we discuss next.
In this work, the user seeks to compute functions that are separable with respect to the individual datasets. Each demanded function , is a function of subfunctions such that , where is a general function (which could be linear or non-linear) of dataset . Hence, using the relation , each demanded function can be written in the following form:
In the special case of linearly separable functions (Special instances of the linearly separable representation of subfunctions given in (4) are linear functions of the datasets and are denoted by .) [39], the demanded functions take the form:
where is the subfunction vector, and the coefficient matrix is known to the master node, servers, and the user. In other words, is a set of linear maps from the subfunctions , where . We do not restrict to linearly separable functions, i.e., it may hold that .
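As a toy illustration of the linearly separable form above, the short sketch below (our own example; the coefficient matrix A, the subfunction vector w, and the choice of GF(2) arithmetic are assumptions made purely for illustration) evaluates the demanded functions as linear combinations of the subfunction outcomes.

```python
import numpy as np

# Hypothetical example: K = 4 datasets and 2 demanded functions over GF(2).
# w[k] plays the role of the subfunction outcome computed from dataset k.
A = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0]])   # coefficient matrix known to master, servers, user
w = np.array([1, 0, 1, 1])     # subfunction outcomes, one per dataset

F = A.dot(w) % 2               # linearly separable demands: F = A w over GF(2)
print(F)                       # [1 1]
```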
2.4. Communication Cost for the Characteristic-Graph-Based Computing Approach
To compute , each server constructs a characteristic graph, denoted by , for compressing . More specifically, for asymptotic lossless computation of the demanded functions, the server builds the n-th OR power of for compressing to determine the transmitted information. The minimal possible code rate achievable to distinguish the edges of as is given by the characteristic graph entropy, . For a primer on key graph-theoretic concepts, characteristic-graph-related definitions, and the fundamental compression limits of characteristic graphs, we refer the reader to [76,79,82]. In this work, we solely focus on the characterization of the total communication cost from all servers to the user, i.e., the achievable sum-rate, without accounting for the costs of communication between the master and the servers or of the computations performed at the servers and the user.
Each builds a mapping from to a valid coloring of , denoted by . The coloring specifies the color classes of that form independent sets to distinguish the demanded function outcomes. Given an encoding function that models the transmission of server for computing , we denote by the color encoding performed by server for . Hence, the communication rate of server , for a sufficiently large blocklength n, where is the length for the color encoding performed at , is
where the inequality follows from exploiting the achievability of , where is the chromatic entropy of the graph [73,75]. We refer the reader to Appendix A.2 for a detailed description of the notions of chromatic and graph entropies (cf. (A9) and (A10), respectively).
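For completeness, the standard limit underlying this inequality (Körner [75]; Alon and Orlitsky [73]) can be restated in generic notation as follows; this is a sketch of the known result rather than a reproduction of the paper’s expression (5).

```latex
% Rate of server k when it encodes a valid coloring of the n-th OR power
% of its characteristic graph G_k:
R_k \;\geq\; \frac{1}{n}\, H^{\chi}_{G_k^{(n)}}\!\left(X_k\right)
     \;\xrightarrow[\;n \to \infty\;]{}\; H_{G_k}\!\left(X_k\right),
% i.e., the normalized chromatic entropy converges to the graph entropy, so any
% rate above H_{G_k}(X_k) suffices for asymptotically lossless computation.
```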
For the multi-server multi-function distributed setup, using the characteristic-graph-based fundamental limit in (5), an achievable sum-rate for asymptotic lossless computation is
We next provide our main results in Section 3.
3. Main Results
In this section, we analyze the multi-server multi-function distributed computing framework exploiting the characteristic-graph-based approach in [75]. In contrast to the previous research attempts in this direction, our solution method is general, and it captures (i) general input statistics or dataset distributions or the skew in data instead of assuming uniform distributions, (ii) correlations across datasets, (iii) any dataset placement model across servers beyond the cyclic [39] or the Maddah–Ali and Niesen [83] placements, and (iv) general function classes requested by the user, instead of focusing on a particular function type (see, e.g., [39,67,84]).
Subsequently, we delve into specific function computation scenarios. First, we present our main result (Theorem 1), which is the most general form that captures (i)–(iv). We then demonstrate (in Proposition 1) that the celebrated result of Wan et al. [Theorem 2] [39] can be obtained as a special case of Theorem 1, given that (i) the datasets are i.i.d. and uniform over q-ary fields, (ii) the placement of datasets across servers is cyclic, and (iii) the demanded functions are linearly separable, given as in (4). Under a correlated and identically distributed Bernoulli dataset model with a skewness parameter for datasets, we next present in Proposition 2 the achievable sum rate for computing Boolean functions. Finally, in Proposition 3, we analyze our characteristic-graph-based approach for evaluating multi-linear functions, a pertinent class of non-linear functions, under the assumption of cyclic placement and i.i.d. Bernoulli-distributed datasets with parameter and derive an upper bound on the sum rate needed. To gain insight into our analytical results and demonstrate the savings in the total communication cost, we provide some numerical examples.
We next present our main theorem (Theorem 1), on the achievable communication cost for the multi-server multi-function topology, which holds for all input statistics under any correlation model across datasets and for the distributed computing of all function classes requested by the user, regardless of the data assignment over the servers’ caches. The key to capturing the structure of general functions in Theorem 1 is the utilization of a characteristic-graph-based compression technique, as proposed by Körner in [75] (For a more detailed description of characteristic graphs and their entropies, see Appendix A.2.).
Theorem 1
(Achievable sum-rate using the characteristic graph approach for general functions and distributions). In the multi-server multi-function distributed computation model, denoted by , under general placement of datasets, for a set of general functions requested by the user, and under general jointly distributed dataset models, including non-uniform inputs and allowing correlations across datasets, the characteristic-graph-based compression yields the following upper bound on the achievable communication rate:
where
- is the union characteristic graph (We refer the reader to (A12) (Appendix A.2) for the definition of a union of characteristic graphs.) that server builds for computing ,
- denotes a codebook of functions that server uses for computing ,
- each subfunction , is defined over a q-ary field such that the characteristic is at least 2, and
- such that denotes the transmitted information from server .
Proof.
See Appendix B.1. □
Theorem 1 provides a general upper bound on the sum-rate for computing functions under general dataset statistics, correlations, and placement models, and it allows any function type over a field of characteristic . We note that in (7), the codebook determines the structure of the union characteristic graph , which, in turn, determines the distribution of . Therefore, the tightness of the rate upper bound relies essentially on the codebook selection. We also note that it is possible to analyze the computational complexity of building a characteristic graph and computing the bound in (7) via evaluating the complexity of the transmissions determined by for a given . However, the current manuscript focuses primarily on the cost of communication, and we leave the computational complexity analysis to future work. Because (7) is not analytically tractable, in the following, we focus on special instances of Theorem 1 to gain insights into the effects of input statistics, dataset correlations, and special function classes in determining the total communication cost.
We next demonstrate that the achievable communication cost for the special scenario of the distributed linearly separable computation framework given in [Theorem 2] [39] is subsumed by the characterization provided in Theorem 1, and we showcase the resulting achievable sum-rate for linearly separable functions.
Proposition 1
(Achievable sum-rate using the characteristic graph approach for linearly separable functions and i.i.d. subfunctions over ). In the multi-server multi-function distributed computation model, denoted by , under the cyclic placement of datasets, where , and for a set of linearly separable functions, given as in (4), requested by the user, and given i.i.d. uniformly distributed subfunctions over a field of characteristic , the characteristic-graph-based compression yields the following bound on the achievable communication rate:
Proof.
See Appendix B.2. □
We note that Theorem 1 results in Proposition 1 when three conditions hold: (i) the dataset placement across servers is cyclic following the rule in (1), (ii) the subfunctions are i.i.d. and uniform over (see (A21) in Appendix B.2), and (iii) the codebook is restricted to linear combinations of subfunctions , which yields that the independent sets of satisfy a set of linear constraints (We detail these linear constraints in Appendix B.2: the set of linear equations given in (A22) is used to simplify the entropy of the union characteristic graph via the expression given in (A20), which in turn evaluates the upper bound given in (A18) on the achievable sum rate for computing the desired functions, exploiting the entropies of the union characteristic graphs of the servers, given the recovery threshold .) in the variables . Note that the linear encoding and decoding approach for computing linearly separable functions, proposed by Wan et al. in [Theorem 2] [39], is valid over a field of characteristic . However, in Proposition 1, the characteristic of is at least 2, i.e., , generalizing [Theorem 2] [39] to larger input alphabets.
Next, we aim to demonstrate the merits of the characteristic-graph-based compression in capturing dataset correlations within the multi-server multi-function distributed computation framework. More specifically, we restrict the general input statistics in Theorem 1 such that the datasets are correlated and identically distributed, where each subfunction follows a Bernoulli distribution with the same parameter , i.e., , with , and the user demands arbitrary Boolean functions regardless of the data assignment. Similarly to Theorem 1, the following proposition (Proposition 2) holds for general function types (Boolean) regardless of the data assignment.
Proposition 2
(Achievable sum-rate using the characteristic graph approach for general functions and identically distributed subfunctions over ). In the multi-server multi-function distributed computing setting, denoted by , under the general placement of datasets, and for a set of Boolean functions requested by the user, and given identically distributed and correlated subfunctions with , , where , the characteristic-graph-based compression yields the following bound on the achievable communication rate:
where
- denotes a codebook of Boolean functions that server uses,
- such that denotes the transmitted information from server ,
- has two maximal independent sets (MISs), namely, and , yielding and , respectively, and
- the probability that yields the function value is given as
Proof.
See Appendix B.3. □
While, admittedly, the above approach (Proposition 2) may not directly offer sufficient insight, it does employ the new machinery to offer a generality that allows us to plug in any set of parameters to determine the achievable performance.
We now contrast Propositions 1 and 2, which give the total communication costs for computing linearly separable and Boolean functions, respectively, over . By exploiting the skew and correlations of the datasets indexed by , as well as the functions’ structures via the MISs and of server , Proposition 2 demonstrates that harnessing the correlation across the datasets can indeed reduce the total communication cost versus the setting in Proposition 1, which was devised under the assumption of i.i.d. and uniformly distributed subfunctions.
Prior works have focused on devising distributed computation frameworks and exploring their communication costs for specific function classes. For instance, in [62], Körner and Marton restricted the computation to the binary sum function, and in [72], Han and Kobayashi classified functions into two categories depending on whether or not they can be computed at a sum rate that is lower than that of [60]. Furthermore, the computation problem has been studied for specific topologies, e.g., the side information setting in [73,74]. Despite the existing efforts, see, e.g., [62,72,73,74], to the best of our knowledge, for the given multi-server multi-function distributed computing scenario, there is still no general framework for determining the fundamental limits of the total communication cost for computing general non-linear functions. Indeed, for this setting, the most pertinent existing work that applies to general non-linear functions and provides an upper bound on the achievable sum rate is that of Slepian–Wolf [60]. On the other hand, the achievable scheme presented in Theorem 1 can provide savings in the communication cost over [60] for functions including linearly separable functions and beyond. To that end, we exploit Theorem 1 to determine an upper bound on the achievable sum-rate for distributed computing of a multi-linear function of the form
Note that (11) is used in various scenarios, including distributed machine learning, e.g., to reduce variance in noisy datasets via ensemble learning [85] or weighted averaging [86], sensor network applications to aggregate readings for improved data analysis [87], as well as distributed optimization and financial modeling, where these functions play pivotal roles in establishing global objectives and managing risk and return [88,89].
Drawing on the utility of characteristic graphs in capturing the structures of data and functions, as well as input statistics and correlations, and the general result in Theorem 1, our next result, Proposition 3, demonstrates a new upper bound on the achievable sum rate for computing multi-linear functions within the framework of multi-server and multi-function distributed computing via exploiting conditional graph entropies.
Proposition 3
(Achievable sum-rate using the characteristic graph approach for multi-linear functions and i.i.d. subfunctions over ). In a multi-server multi-function distributed computing setting, denoted by , under the cyclic placement of datasets, where , and for computing the multi-linear function (), given as in (11), requested by the user, and given i.i.d. uniformly distributed subfunctions , , for some , the characteristic-graph-based compression yields the following bound on the achievable communication rate:
where
- denotes the probability that the product of M subfunctions, with being i.i.d. across , takes the value one, i.e., ,
- the variable denotes the minimum number of servers needed to compute , given as in (11), where each of these servers computes a disjoint product of M subfunctions, and
- the variable represents whether an additional server is needed to aid the computation, and if , then denotes the number of subfunctions to be computed by the additional server, and similarly to the above, .
Proof.
See Appendix B.4. □
We next detail two numerical examples (Section 4.1 and Section 4.2) to showcase the achievable gains in the total communication cost for Proposition 2 and Proposition 3, respectively.
4. Numerical Evaluations to Demonstrate the Achievable Gains
Given , to gain insight into our analytical results and demonstrate the savings in the total communication cost, we provide some numerical examples. In Section 4.1, we focus on computing linearly separable functions (cf. Proposition 2), and in Section 4.2, we focus on multi-linear functions (cf. Proposition 3).
To that end, to characterize the performance of our characteristic-graph-based approach for linearly separable functions, we denote by the gain of the sum-rate for the characteristic-graph-based approach given in (9) over the sum-rate of the distributed scheme of Wan et al. in [39], given in (8), and by the gain of the sum-rate in (9) over the sum-rate of the fully distributed approach of Slepian–Wolf [60]. To capture general statistics, i.e., dataset skewness and correlations, and make a fair comparison, we adapt the transmission model of Wan et al. in [39] via modifying the i.i.d. dataset assumption.
We next study an example scenario (Section 4.1) for computing a class of linearly separable functions (4) over , where each of the demanded functions takes the form , under a specific correlation model across subfunctions. More specifically, when the subfunctions are identically distributed and correlated across , and , we model the correlation across datasets (a) by exploiting the joint PMF model in [Theorem 1] [90] and (b) by using the joint PMF described in Table 2. Furthermore, we assume for that is full rank. For the proposed setting, we next demonstrate the achievable gains of our proposed technique versus for computing (4) as a function of the skew, , and correlation, , of the datasets, , and other system parameters, and we showcase the results via Figures 1 and 3–5.
Table 2.
Joint PMF of and with a crossover parameter p, in Section 4.1 (Scenario II).
4.1. Example Case: Distributed Computing of Linearly Separable Functions over
We consider the computation of the linearly separable functions given in (4) for general topologies, with general N, K, M, , , over , with an identical skew parameter for each subfunction, where Bern(), , using cyclic placement as in (1) and incorporating the correlation between the subfunctions, with the correlation coefficient denoted by . We consider three scenarios, as described next:
- Scenario I. The number of demanded functions is , where the subfunctions could be uncorrelated or correlated.
This scenario is similar to the setting in [39]; however, unlike [39], which is valid over a field of characteristic , we consider , and in the case of correlations, i.e., when , we capture the correlations across the transmissions (evaluated from subfunctions of datasets) from the distributed servers, as detailed earlier in Section 3. We first assume that the subfunctions are not correlated, i.e., , and evaluate for . The parameter of , i.e., the probability that takes the value 1, can be computed using the recursive relation
where is the binomial PMF, and is the probability of the modulo 2 sum of any subfunctions taking the value one, with being i.i.d. across , with the convention .
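A minimal sketch of this recursion (a standard identity for the modulo-2 sum of i.i.d. Bernoulli variables; the function and variable names below are ours) is given next: with p_0 = 0, the probability that the sum of m Bern(ε) terms equals one satisfies p_m = p_{m-1}(1-ε) + (1-p_{m-1})ε, with closed form (1-(1-2ε)^m)/2.

```python
def xor_prob_one(m, eps):
    """P[X_1 + ... + X_m = 1 (mod 2)] for i.i.d. X_i ~ Bern(eps)."""
    p = 0.0                              # convention: empty sum equals 0
    for _ in range(m):
        p = p * (1 - eps) + (1 - p) * eps
    return p

eps, m = 0.1, 4
print(xor_prob_one(m, eps))              # 0.2952
print((1 - (1 - 2 * eps) ** m) / 2)      # closed-form check: 0.2952
```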
Given , we denote by the minimum number of servers, corresponding to the subset , needed to compute , where each server, with a cache size of M, computes a sum of M subfunctions, and where across these servers the sets of subfunctions are disjoint. Hence, . Furthermore, the variable represents whether additional servers beyond these servers are needed to aid the computation, and if , then denotes the number of subfunctions to be computed by the set of additional servers, namely, , and similarly to the above, , which is obtained by evaluating at .
Adapting (8) for , we obtain the total communication cost for computing the linearly separable function as
Using Proposition 2 and (13), we derive the sum rate for distributed lossless computing of as
where the indicator function captures the rate contribution from the additional server, if any. Using (15), the gain over the linearly separable solution of [39] is presented as
where represents the rate needed from the set of additional servers , aiding the computation through communicating the sum of the remaining subfunctions in the set , where the summation for these remaining functions in is denoted as , which cannot be captured by the set .
Given for the given modulo 2 sum function, we next incorporate the correlation model in [90] for each , identically distributed with , and correlation across any two subfunctions. The formulation in [90] yields the following PMF for :
where and are indicator functions, where and .
We depict the behavior of our gain, , using the same topology as in [39], with different system parameters , under in Figure 1-(Left). As we increase both N and K, along with the number of active servers, , the gain, , of the characteristic graph approach increases. This stems from the ability of the characteristic graph approach to compute functions of using servers. From Figure 1-(Right), it is evident that by capturing correlations between the subfunctions, and hence across the servers’ caches, grows more rapidly until it reaches the maximum of (16), corresponding to , attributed to full correlation (see Figure 1-(Right)).
Figure 1.
The gain of the characteristic graph approach for in Section 4.1 (Scenario I). (Left) for various distributed topologies. (Right) The correlation model given as (17) for with different values.
What we can also see is that for , the gain rises with the increase in and linearly grows with . As increases, reaching its maximum at , the gain is maximized, yielding the minimum communication cost that can be achieved with our technique. Here, the gain is dictated by the topology and is given as . This linear relation shows that this specific topology can provide a very substantial reduction in the total communication cost, as goes to 1, over the state of the art [39], as shown in Figure 1-(Right) via the purple (solid) curve. Furthermore, one can draw a comparison between the characteristic graph approach and the approach in [60]. Here, we represent the gain as . It is noteworthy that the sum-rate of all servers using the coding approach of Slepian–Wolf [60] is . With , this expression simplifies to , resulting again in a substantial reduction in the communication cost, as we see from in (14) for the same topology of the purple (solid) curve as shown in Figure 1-(Right).
- Scenario II. The number of demanded functions is , where the subfunctions could be uncorrelated or correlated.
To gain insights into the behavior of , we consider an example distributed computation model with , , where the subfunctions are assigned to , , and in a cyclic manner, with , , and with , and .
Given , using the characteristic graph approach for individual servers, an achievable compression scheme, for a given ordering i and j of server transmissions, relies on first compressing the characteristic graph constructed by server , which has no side information, and then on the conditional rate needed for compressing the colors of for any other server , incorporating the side information obtained from server . Thus, by contrasting the total communication costs associated with the possible orderings, the minimum total communication cost can be determined (We can generalize (18) to , where, for a given ordering of server transmissions, any consecutive server that transmits sees all previous transmissions as side information, and the best ordering is the one that yields the minimum total communication cost, i.e., .). The achievable sum rate here takes the form
Focusing on the characteristic graph approach, we illustrate how each server builds its union characteristic graph for simultaneously computing and according to (A12) (as detailed in Appendix A.2.1), in Figure 2. In (18), the first term corresponds to , where is built using the support of and , and the edges are built based on the rule that if for some , which, as we see here, requires two colors. Similarly, server 2 constructs given , where using the support of and , and where determines , and hence, to compute given , any two vertices taking values (Here, and represent two different realizations of the pair of subfunctions and .) and are connected if . Hence, we require two distinct colors for . As a result, the first term yields a sum rate of . Similarly, the second term of (18) captures the impact of , where server 2 builds using the support of and , and is a complete graph to distinguish all possible binary pairs to compute and , requiring 4 different colors. Given , both and are deterministic. Hence, given , has no edges, which means that . As a result, the ordering of server transmission given by the second term of (18) yields the same sum rate of . For this setting, the minimum required rate is , and the configuration captured by the second term provides a lower recovery threshold of versus for the configurations of server transmissions given by the first term (18). The different achieved by these two configurations is also captured by Figure 2.
Figure 2.
Colorings of graphs in Section 4.1 (Scenario II). (Top Left–Right) Characteristic graphs and , respectively. (Bottom Left–Right) The minimum conditional entropy colorings of given and given , respectively.
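To illustrate how a union characteristic graph and a valid coloring of it can be formed in practice, the sketch below builds, for a small binary example, the confusability edges induced by a list of requested functions and then colors the graph greedily; the edge rule follows the description above, while the specific functions (XOR and AND) and all names are purely illustrative.

```python
from itertools import combinations

def union_characteristic_graph(X_alphabet, Y_alphabet, functions):
    """Edge (x1, x2) iff some y and some requested function f give
    f(x1, y) != f(x2, y), i.e., x1 and x2 must be distinguished."""
    return {(x1, x2) for x1, x2 in combinations(X_alphabet, 2)
            if any(f(x1, y) != f(x2, y)
                   for y in Y_alphabet for f in functions)}

def greedy_coloring(vertices, edges):
    """Assign each vertex the smallest color unused by its colored neighbors."""
    color = {}
    for v in vertices:
        taken = {color[u] for u in color if (u, v) in edges or (v, u) in edges}
        color[v] = min(c for c in range(len(vertices)) if c not in taken)
    return color

# Illustrative demands on binary (x, y): f1 = XOR, f2 = AND.
X = Y = [0, 1]
E = union_characteristic_graph(X, Y, [lambda x, y: x ^ y, lambda x, y: x & y])
print(E, greedy_coloring(X, E))   # {(0, 1)} {0: 0, 1: 1} -> two colors suffice
```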
Alternatively, in the linearly separable approach [39], servers transmit the requested function of the datasets stored in their caches. For distributed computing of and , servers 1 and 2 transmit at rate , for computing , and at rate , for function . As a result, the achievable communication cost is given by . Here, for a fair comparison, we update the model studied in [39] to capture the correlation within each server without accounting for the correlation across the servers.
Under this setting, for , we see that the gain of the characteristic graph approach over the linearly separable solution of [39] for computing and as a function of takes the form
where for follows from the concavity of , which yields the inequality . Furthermore, approaches as (see Figure 3).
Figure 3.
in (19) versus , for distributed computing of and , where , , with , in Section 4.1 (Scenario II).
We next examine the setting where the correlation coefficient is nonzero, using the joint PMF , as depicted in Table 2, of the required subfunctions ( and ) in computing and . This PMF describes the joint PMF corresponding to a binary non-symmetric channel model, where the correlation coefficient between and is , and where . Thus, our gain here compared to the linearly separable encoding and decoding approach of [39] is given as
We now consider the correlation model in Table 2, where the coefficient rises in for a fixed p. In Figure 4-(Left), we illustrate the behavior of , given by (20), for computing and for as a function of p and , where for this setting, the correlation coefficient is a decreasing function of p and an increasing function of . We observe from (20) that the gain satisfies for all , which monotonically increases in p—and hence monotonically decreases in due to the relation —as a function of the deviation of from . For , increases in . For example, for then , as depicted by the green (solid) curve. Similarly, given , decreasing causes to exhibit a rising trend, e.g., for then , as shown by the red (dash-dotted) curve. As p approaches one, goes to as tends to zero, which can be derived from (20). We note here that the gains are generally smaller than in the previous set of comparisons, as shown in Figure 3.
Figure 4.
versus , for distributed computing of and , where , , in Section 4.1, using different joint PMF models for (Scenario II). (Left) in (20) for the joint PMF in Table 2 for different values of p. (Right) for the joint PMF in (17) for different values of .
More generally, given a user request consisting of linearly separable functions (i.e., satisfying (4)), and after considering (20) beyond , we see that is at most as approaches one. We next use the joint PMF model used in obtaining (17), where we observe that , to see that the gain takes the form
where , and . For this model, we illustrate versus in Figure 4-(Right) for different values. Evaluating (21), the peak achievable gain is attained when at , yielding and , and hence, a gain , as shown by the purple (solid) curve. On the other hand, for , we observe that , yielding and , and hence, it can be shown that the gain is lower bounded as .
- Scenario III. The number of demanded functions is , and the number of datasets is equal to the number of servers, i.e., , where the subfunctions are uncorrelated.
We now provide an achievable rate comparison between the approach in [39] and our graph-based approach, as summarized by our Proposition 1, which generalizes the result in [Theorem 2] [39] to finite fields with characteristic , for the case of .
Here, to capture dataset skewness and make a fair comparison, we adapt the transmission model of Wan et al. in [39] via modifying the i.i.d. dataset assumption and taking the skewness into account in determining the local computations at each server.
For the linearly separable model in (4), adapted to account for our setting, exploiting the summation , and given in (15), the communication cost for a general number of with is expressed as
In (22), as approaches 0 or 1, then . Subsequently, the achievable communication cost for the characteristic graph model can be determined as
To understand the behavior of , knowing that is a fixed parameter, we need to examine the dynamic component . Exploiting the Schur concavity (A real-valued function is Schur concave if holds whenever majorizes , i.e., , for all [91].) of the binary entropy function, which tells us that , we can see that as approaches 0 or 1, then
where the inequality between the left- and right-hand sides becomes loose as a function of M. As a result, as approaches 0 or 1, then , which follows from exploiting (22), (23) and the achievability of the upper bound in (24). We illustrate the upper bound on in Figure 5 and demonstrate the behavior for demanded functions across various topologies with circular dataset placement, namely, for various , i.e., when the amount of circular shift between two consecutive servers is and the cache size is , and for and . Accounting for the symmetry of the entropy function, we only plot for . The multiplicative coefficient of determines the growth, which is depicted by the curves.
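The following numerical sketch (with our own helper names, and not a restatement of the exact bound in (24)) illustrates the entropy terms driving this behavior: as the skew parameter approaches 0 or 1, both the per-subfunction cost, which scales with the binary entropy of the skew, and the entropy of a modulo-2 sum of M such subfunctions collapse toward zero.

```python
import math

def h(p):
    """Binary entropy in bits, with h(0) = h(1) = 0."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

M = 4
for eps in (0.5, 0.1, 0.01):
    p_sum = (1 - (1 - 2 * eps) ** M) / 2     # P[mod-2 sum of M Bern(eps) terms = 1]
    print(f"eps={eps}: M*h(eps)={M * h(eps):.4f}, h(p_sum)={h(p_sum):.4f}")
# eps=0.5: M*h(eps)=4.0000, h(p_sum)=1.0000
# eps=0.1: M*h(eps)~1.876,  h(p_sum)~0.875
# eps=0.01: both terms are already far below one bit
```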
Figure 5.
in a logarithmic scale versus for demanded functions for various values of , with for different topologies, as detailed in Section 4.1 (Scenario III).
Thus, for a given topology with demanded functions, for , using (24), we see that grows exponentially with the term for (Here, we note that the behavior of is symmetric around .), and a very substantial reduction in the total communication cost is possible as approaches , as shown in Figure 5 by the blue (solid) curve. The gain over [Theorem 2] [39], , for a given topology, changes proportionally to . The gain over [60], , for , scales linearly (Incorporating the dataset skew into Proposition 1 ([Theorem 2] [39]), is simplified to (22), which from (24) can grow linearly in at high skew, explaining the inferior performance of Proposition 1 over [60] as a function of the skew.) with . For instance, the gain for the blue (solid) curve in Figure 5 is .
In general, other functions in , such as the bitwise AND and the multi-linear function (see, e.g., Proposition 3), are more skewed and have lower entropies than linearly separable functions and, hence, are easier to compute. Therefore, the cost given in (23) can serve as an upper bound for the communication costs of those more skewed functions in .
We have here provided insights into the achievable gains in communication cost for several scenarios. We leave the study of for more general topologies and correlation models beyond (17) devised for linearly separable functions, and beyond the joint PMF model in Table 2, as future work.
Proposition 3 illustrates the power of the characteristic graph approach in decreasing the communication cost for distributed computing of multi-linear functions, given as in (11), compared to recovering the local computations using [60]. We denote by the gain of the sum-rate for the graph entropy-based approach given in (12)—using the conditional entropy-based sum-rate expression in (A30)—over the sum-rate of the fully distributed scheme of Slepian–Wolf [60] for computing (11). For the proposed setting, we next demonstrate the achievable gains of Proposition 3 via an example and showcase the results in Figure 6.
Figure 6.
Gain versus for computing (11), where , , . (Left) The set of parameters N, K, and M are indicated for each configuration. (Right) versus to observe the effect of N for a fixed total cache size and fixed K.
4.2. Distributed Computation of K-Multi-Linear Functions over
We study the behavior of versus the skewness parameter for computing the multi-linear function given in (11) for i.i.d. uniform , across , and for a given with parameters N, K, , such that , , , and the number of replicates per dataset is . We use Proposition 3 to determine the sum-rate upper bound and illustrate the gains in decibels versus in Figure 6.
From the numerical results in Figure 6 (Left), we observe that the sum-rate gain of the graph entropy-based approach versus the fully distributed approach of [60], , could reach up to more than 10-fold gain in compression rate for uniform and up to -fold for skewed data. The results for showcase that our proposed scheme can guarantee an exponential rate reduction over [60] as a function of decreasing . Furthermore, the sum-rate gains scale linearly with the cache size M, which scales with K given . Note that diminishes with increasing N when M and are kept fixed. In Figure 6 (Right), for , a fixed total cache size , and hence, fixed K, the gain for large N and small M is higher versus small N and large M, demonstrating the power of the graph-based approach as the topology becomes more and more distributed.
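As a rough numeric illustration of this comparison (not the exact conditional-graph-entropy expressions of Proposition 3; the parameter values below are arbitrary), the following sketch contrasts the cost of describing only a server’s product of M i.i.d. Bern(ε) subfunctions, namely the binary entropy of ε^M, with the Slepian–Wolf-style cost M·h(ε) of recovering all M subfunctions; the gap widens rapidly as ε moves away from 1/2.

```python
import math

def h(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

M = 4  # number of i.i.d. Bern(eps) subfunctions multiplied at one server
for eps in (0.5, 0.25, 0.1):
    graph_based = h(eps ** M)      # describe only the product (it is 1 w.p. eps^M)
    slepian_wolf = M * h(eps)      # recover all M subfunctions individually
    print(f"eps={eps}: graph-based~{graph_based:.3f} bits, "
          f"SW~{slepian_wolf:.3f} bits, ratio~{slepian_wolf / graph_based:.0f}x")
```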
5. Conclusions
In this paper, we devised a distributed computation framework for general function classes in multi-server multi-function, single-user topologies. Specifically, we analyzed upper bounds on the communication cost for computing in such topologies, exploiting Körner’s characteristic graph entropy and incorporating the structures of the datasets and functions, as well as the dataset correlations. To showcase the achievable gains of our framework and understand the roles of dataset statistics, correlations, and function classes, we performed several experiments under cyclic dataset placement over a field of characteristic two. Our numerical evaluations for distributed computing of linearly separable functions, as demonstrated in Section 4.1 via three scenarios, indicate that by incorporating dataset correlations and skew, it is possible to achieve a very substantial reduction in the total communication cost over the state of the art. Similarly, for distributed computing of multi-linear functions, in Section 4.2, we demonstrate a very substantial reduction in the total communication cost versus the state of the art. Our main results (Theorem 1 and Propositions 1–3) and observations through the examples help us gain insights into reducing the communication cost of distributed computation by taking into account the structures of datasets (skew and correlations) and functions (characteristic graphs).
Potential future directions include providing a tighter achievability result for Theorem 1 and devising a converse bound on the sum-rate. They also involve conducting experiments under the coded placement scheme of Maddah–Ali and Niesen detailed in [83] in order to capture a finer granularity of placement that can help tighten the achievable rates. They further involve, beyond the special cases detailed in Propositions 1–3, exploring the achievable gains for a broader set of distributed computation scenarios, e.g., over-the-air computing, cluster computing, coded computing, distributed gradient descent, or more generally, distributed optimization and learning, as well as goal-oriented and semantic communication frameworks, which can be reinforced by compression that captures the skewness, correlations, and placement of datasets, the structures of functions, and the topology.
Author Contributions
Conceptualization, D.M. and P.E.; methodology, P.E. and D.M.; software, D.M. and M.R.D.S.; validation, D.M., M.R.D.S. and B.S.; formal analysis, D.M.; investigation, D.M. and M.R.D.S.; resources, D.M.; data curation (not applicable); writing—original draft preparation, D.M. and M.R.D.S.; writing—review and editing, D.M., B.S. and M.R.D.S.; visualization, D.M. and M.R.D.S.; supervision, D.M. and P.E.; project administration, D.M. and P.E.; funding acquisition, D.M. and P.E. All authors have read and agreed to the published version of the manuscript.
Funding
This research was partially supported by a Huawei France-funded Chair towards Future Wireless Networks and supported by the program “PEPR Networks of the Future” of France 2030. Co-funded by the European Union (ERC, SENSIBILITÉ, 101077361, and ERC-PoC, LIGHT, 101101031). The views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Data is contained within the article.
Acknowledgments
The authors thank Kai Wan at the Huazhong University of Science and Technology, Wuhan, China for interesting discussions.
Conflicts of Interest
The authors declare a conflict of interest with MIT, Northeastern, UT Austin, and Inria Paris research center due to academic relationships. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
| ach | Achievable |
| Bern | Bernoulli |
| G | Graph |
| i.i.d. | Independent and identically distributed |
| lin | Linearly separable encoding |
| MIS | Maximal independent set |
| PMF | Probability mass function |
| SW | Slepian–Wolf encoding |
Appendix A. Technical Preliminary
Here, we detail the notion of characteristic graphs and their entropy in the context of source compression. We recall that the below proofs use the notation given in Section 1.5.
Appendix A.1. Distributed Source Compression, and Communication Cost
Given statistically dependent, finite-alphabet, i.i.d. random sequences where for , the Slepian–Wolf theorem gives a theoretical lower bound for the lossless coding rate of distributed servers in the limit as n goes to infinity. Denoting by the encoding rate of server , the sum-rate (or communication cost) for distributed source compression is given by
where denotes the indices of a subset of servers, its complement, and .
We recall that in the case of distributed source compression, given by the coding theorem of Slepian–Wolf [60], the encoder mappings specify the bin indices for the server sequences . The bin index is such that every bin of each n-vector of server is randomly drawn under the uniform distribution across the set of bins. The transmission of server is , where is the encoding function of onto bins. The total number of symbols in is . This value corresponds to the aggregate number of symbols in the transmitted subfunctions from the server. Hence, the communication cost (rate) of for a sufficiently large n satisfies
where the cost can be further reduced via a more efficient mapping if , are correlated.
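To make the rate conditions referenced above concrete, the following is the standard statement of the Slepian–Wolf region [60], written here with generic placeholder symbols ($X_i$ for the source sequence at server $i$, $R_i$ for its encoding rate, and $N$ for the number of servers), which may differ from the notation of Section 1.5:
\[
\sum_{i \in \mathcal{S}} R_i \;\geq\; H\!\left(X_{\mathcal{S}} \,\middle|\, X_{\mathcal{S}^c}\right) \quad \text{for all } \mathcal{S} \subseteq \{1, \dots, N\},
\qquad \text{and hence} \qquad
\sum_{i=1}^{N} R_i \;\geq\; H(X_1, \dots, X_N),
\]
where $\mathcal{S}^c$ denotes the complement of $\mathcal{S}$, matching the description following the sum-rate expression above.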
Appendix A.2. Characteristic Graphs, Distributed Functional Compression, and Communication Cost
In this section, we provide a summary of key graph-theoretic points devised by Körner [75] and studied by Alon and Orlitsky [73] and Orlitsky and Roche [74] to understand the fundamental limits of distributed computation.
Let us consider the canonical scenario with two servers, storing and , respectively. The user requests a bivariate function that could be linearly separable or, in general, non-linear. Associated with the source pair is a characteristic graph G, as defined by Witsenhausen [92]. We denote by the characteristic graph that server one builds (server two similarly builds ) for computing (We detail the compression problem for the simultaneous computation of a set of requested functions in Appendix A.2.1.), determined as a function of , , and F, where and an edge if and only if there exists a such that and . Note that the idea of building can also be generalized to multivariate functions, where for [82]. In this paper, we only consider vertex colorings. A valid coloring of a graph is an assignment in which each vertex of receives a color (code) such that adjacent vertices receive distinct colors (codes). Vertices that are not connected can be assigned the same or different colors. The chromatic number of a graph is the minimum number of colors needed to have a valid coloring of [76,77,79].
Definition A1
(Characteristic graph entropy [73,75]). Given a random variable with characteristic graph for computing function , the entropy of the characteristic graph is expressed as
where is the set of all MISs of , where an MIS is not a subset of any other independent set, where an independent set of a graph is a set of its vertices in which no two vertices are adjacent [93]. Notation means that the minimization is over all distributions such that implies , where is an MIS of .
Similarly, the conditional graph entropy for with characteristic graph for computing , given as side information, is defined in [74] using the notation that indicates a Markov chain:
The Markov chain relation in (A4) implies that [Ch. 2] [94]. In (A4), the goal is to determine the equivalence classes of that have the same function outcome such that . We next consider an example to clarify the distinction between characteristic graph entropy, , and entropy of a conditional characteristic graph, or conditional graph entropy, .
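For reference, the closed-form expressions from [73,74,75] that Definition A1 and (A4) refer to can be written, in generic notation (with $\Gamma(G_X)$ denoting the set of MISs of the characteristic graph $G_X$ of $X$, and $Y$ the side information), as
\[
H_{G_X}(X) \;=\; \min_{X \in W \in \Gamma(G_X)} I(W; X),
\qquad
H_{G_X}(X \mid Y) \;=\; \min_{\substack{W - X - Y \\ X \in W \in \Gamma(G_X)}} I(W; X \mid Y),
\]
where the minimization is over conditional PMFs of $W$ given $X$ that place $X$ inside an MIS $W$, and, for the conditional version, $W - X - Y$ indicates the Markov chain constraint.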
Example A1
(Characteristic graph entropy of ternary random variables [Examples 1–2] [74]). In this example, we first investigate the characteristic graph entropy, , and the conditional graph entropy, .
- 1.
- Let be a uniform PMF over the set . Assume that has only one edge, i.e., . Hence, the set of MISs is given as . To determine the entropy of a characteristic graph, i.e., , from (A3), our objective is to minimize , which is a convex function of . Hence, is minimized when the conditional distribution of is selected as , , and . As a result of this PMF, we have
- 2.
- Let be a uniform PMF over the set and . Note that given the joint PMF. To determine the conditional characteristic graph entropy, i.e., , using (A4), our objective is to minimize , which is convex in . Hence, is minimized when is selected as , and . Hence, we obtain which yields, using , that
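As a sanity check for the first item, assume (as a concrete instance of the single-edge graph) that the edge connects $x_1$ and $x_3$, so that the MISs are $W_1 = \{x_1, x_2\}$ and $W_2 = \{x_2, x_3\}$; the labels used here are illustrative placeholders. Assigning $x_1$ and $x_3$ deterministically to their unique MISs and splitting $x_2$ evenly between the two gives
\[
H(W \mid X) = \tfrac{1}{3}, \qquad H(W) = 1, \qquad
I(W; X) = H(W) - H(W \mid X) = \tfrac{2}{3} \ \text{bits},
\]
which can be checked to be the minimizing choice, so $H_{G_X}(X) = \tfrac{2}{3}$ bits. This is strictly smaller than $H(X) = \log_2 3 \approx 1.585$ bits, illustrating the gain of graph-based encoding over compressing the source itself.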
Definition A2
(Chromatic entropy [73]). The chromatic entropy of a graph is defined as
where the minimization is over the set of colorings such that is a valid coloring of .
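In generic notation, the chromatic entropy of Definition A2 reads
\[
H_{G_X}^{\chi}(X) \;=\; \min_{c \,:\, c \ \text{is a valid coloring of } G_X} H\big(c(X)\big),
\]
i.e., the smallest entropy of the color random variable $c(X)$ induced by any valid coloring; the symbol $c$ here is a placeholder for the colorings referenced in the definition.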
Let be the n-th OR power of a graph for the source sequence to compress . In this OR power graph, and , where and similarly for , when there exists at least one coordinate such that . We denote a coloring of by . The encoding function at server one is a mapping from to the colors of the characteristic graph for computing . In other words, specifies the color classes of such that each color class forms an independent set that induces the same function outcome.
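A sketch of the OR-power construction just described, using placeholder notation with $\mathcal{X}$ denoting the alphabet of $X$ and $G_X^{(n)}$ the $n$-th OR power of $G_X$:
\[
V\big(G_X^{(n)}\big) = \mathcal{X}^n,
\qquad
\big(x^n, \tilde{x}^n\big) \in E\big(G_X^{(n)}\big)
\;\iff\;
\exists\, \ell \in \{1, \dots, n\} \ \text{such that} \ (x_\ell, \tilde{x}_\ell) \in E(G_X),
\]
so two length-$n$ sequences are connected whenever at least one of their coordinates is connected in $G_X$.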
Using Definition A2, we can determine the chromatic entropy of graph as
In [75], Körner has shown the relation between the chromatic and graph entropies, which we detail next.
Theorem A1
(Chromatic entropy versus graph entropy [75]). The following relation holds between the characteristic graph entropy and the chromatic entropy of graph in the limit of large n:
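In the placeholder notation used above, the relation of Theorem A1 takes the standard form [75]
\[
\lim_{n \to \infty} \frac{1}{n}\, H_{G_X^{(n)}}^{\chi}\big(X^n\big) \;=\; H_{G_X}(X),
\]
i.e., the normalized chromatic entropies of the OR powers converge to the characteristic graph entropy, which justifies encoding by coloring the power graph and then losslessly compressing the colors.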
Appendix A.2.1. A Characteristic-Graph-Based Encoding Framework for Simultaneously Computing a Set of Functions
The user demands a set of functions that are possibly non-linear in the subfunctions. In our proposed framework, for the distributed computing of these functions, we leverage characteristic graphs that can capture the structure of subfunctions. To determine the achievable rate of distributed lossless functional compression, we determine the colorings of these graphs and evaluate the entropy of such colorings. In the case of functions, let be the characteristic graph that server builds for computing function . The graphs are on the same vertex set.
Union graphs for simultaneously computing a set of functions with side information have been considered in [82], using multi-functional characteristic graphs. A multi-functional characteristic graph is an OR function of individual characteristic graphs for different functions [Definition 45] [82]. To that end, server creates a union of graphs on the same set of vertices with a set of edges , which satisfies
In other words, we need to distinguish the outcomes and of server if there exists at least one function out of functions such that , for some given . The server then compresses the union by exploiting (A9) and (A10).
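The union-graph condition in (A9) and (A10) can be sketched as follows, with illustrative placeholder symbols ($x_i, x_i'$ for two realizations at server $i$, $x_{-i}$ for the data available at the other servers, and $f_1, \dots, f_J$ for the demanded functions):
\[
\mathcal{E}_i \;=\; \bigcup_{j=1}^{J} \mathcal{E}_{i,j},
\qquad
(x_i, x_i') \in \mathcal{E}_i
\;\iff\;
\exists\, j \in \{1, \dots, J\}, \ \exists\, x_{-i} \ \text{such that} \ f_j(x_i, x_{-i}) \neq f_j(x_i', x_{-i}),
\]
where $\mathcal{E}_{i,j}$ denotes the edge set of the characteristic graph that server $i$ builds for the single function $f_j$.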
In the special case when the number of demanded functions is large (or tends to infinity), such that the union of all subspaces spanned by the independent sets of each , is the same as the subspace spanned by , MISs of in (A12) for server become singletons, rendering a complete graph. In this case, the problem boils down to the paradigm of distributed source compression (see Appendix A.1).
Appendix A.2.2. Distributed Functional Compression
The fundamental limit of functional compression has been given by Körner [75]. Given for server , the encoding function specifies MISs given by the valid colorings . Let the number of symbols in be for server . Hence, the communication cost of server i, as is given by (5).
Defining for a given subset chosen to guarantee distributed computation of , i.e., , the sum-rate of servers for distributed lossless functional compression for computing equals
where is the joint graph entropy of , and it is defined as [Definition 30] [82]:
where is the coloring of the n-th power graph that builds for computing [82].
Similarly, exploiting [Definition 31] [82], the conditional graph entropy of the servers is given as
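For orientation, the limiting expression behind [Definition 30] [82] referenced above can be sketched, in placeholder notation, as
\[
H_{G_{X_1}, \dots, G_{X_N}}(X_1, \dots, X_N)
\;=\;
\lim_{n \to \infty} \frac{1}{n}\, \min\, H\Big(c_{G_{X_1}^{(n)}}(X_1^n), \dots, c_{G_{X_N}^{(n)}}(X_N^n)\Big),
\]
where the minimum is taken over jointly admissible colorings of the $n$-th power graphs that preserve the demanded function values; the precise class of admissible colorings, and the conditional counterpart in [Definition 31] [82], are as defined in [82].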
Appendix B. Proofs of Main Results
Appendix B.1. Proof of Theorem 1
Consider the general topology, , under a general placement of datasets, for a set of general functions requested by the user, and under general jointly distributed dataset models, including non-uniform inputs and allowing correlations across datasets.
We note that server builds a characteristic graph (The characteristic-graph-based approach is valid provided that each subfunction , contained in is defined over a q-ary field such that , to ensure that the union graph , (or , each) has more than one vertex.) for distributed lossless computing of , . Similarly, server builds a union characteristic graph for computing . We denote by the union characteristic graph, given as in (A12). In the description of , the set is the support set of , i.e., , and is the union of edges, i.e., , where denotes the set of edges in , which is the characteristic graph the server builds for distributed lossless computing for a given function .
To compute the set of demanded functions , we assume that server can use a codebook of functions denoted by such that , where the user can compute its demanded functions using the set of transmitted information provided from any set of servers. More specifically, server chooses a function to encode . Note that represents, in the context of encoding characteristic graphs, the mapping from to a valid coloring . We denote by the color encoding performed by server for the length n realization of , denoted by . For convenience, we use the following shorthand notation to represent the transmitted information from the server:
Combining the notions of the union graph in (A12) and the encodings of the individual servers given in (A16), the rate needed from server for meeting the user demand is upper bounded by the cost of the best encoding, which minimizes the rate of information transmission from the respective server. Equivalently,
where equality is achievable in (A17). Because the user can recover the desired functions using any set of servers, the achievable sum rate is upper bounded by
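The structure of the two bounds described in words in (A17) and (A18) can be sketched as follows, with placeholder symbols ($\mathcal{C}_i$ for the codebook of valid coloring-based encodings at server $i$, $X_{\mathcal{S}_i}^n$ for the length-$n$ realization of the datasets stored there, and $\mathcal{N}$ for any set of servers from which the user can recover its demands):
\[
R_i \;\leq\; \min_{e_i \in \mathcal{C}_i} \frac{1}{n} H\big(e_i(X_{\mathcal{S}_i}^n)\big),
\qquad
R_{\mathrm{sum}}^{\mathrm{ach}} \;\leq\; \min_{\mathcal{N}} \sum_{i \in \mathcal{N}} \min_{e_i \in \mathcal{C}_i} \frac{1}{n} H\big(e_i(X_{\mathcal{S}_i}^n)\big),
\]
for sufficiently large $n$. This is a schematic restatement of the argument rather than the exact expressions of the theorem.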
Appendix B.2. Proof of Proposition 1
For the multi-server multi-function distributed computing architecture, this proposition restricts the demand to be a set of linearly separable functions, given as in (4). Given the recovery threshold , it holds that
where in (A19), we used the identity . Furthermore, if the codebook is restricted to linear combinations of subfunctions, is given by the following set of linear equations:
In other words, , , is a vector-valued function. Note that each server contributes to determining the set of linearly separable functions of datasets, given as in (4), in a distributed manner. Hence, each independent set , with denoting the set of MISs of , of is captured by the linear functions of , i.e., each is determined by (A22). Hence, the user can recover the requested functions by linearly combining the transmissions of the servers:
In (A20), we use the definition of mutual information, , where given and , it holds under cyclic placement that
and are the coefficients for computing function . In (A21), we used the fact that is uniform over and i.i.d. across and rewrote the conditional entropy expression such that
where follows from the fact that is a function of . For a given and field size q, the relation ensures that has q independent sets where each such set contains different values of . Exploiting the fact that is i.i.d. and uniform over , each element of is uniform over . Hence, the achievable sum-rate is upper bounded by
Exploiting the cyclic placement model, we can tighten the bound in (A26). Note that server can help recover M subfunctions (at most, i.e., M transmissions needed to recover M subfunctions), and each of the servers can help recover an additional subfunctions (at most, i.e., transmissions are needed to recover subfunctions). Hence, the set of servers suffices to provide subfunctions and reconstruct any desired function of . Due to cyclic placement, each is stored in exactly servers. Now, let us consider the following four scenarios:
- (i)
- When , it is sufficient for each server to transmit linearly independent combinations of their subfunctions. This leads to resolving linear combinations of K subfunctions from servers that are sufficient to derive the demanded linear functions. Because , there are unresolved linear combinations of K subfunctions.
- (ii)
- When , it is sufficient for each server to transmit at most linearly independent combinations of their subfunctions. This leads to resolving linear combinations of K subfunctions and unresolved linear combinations of K subfunctions.
- (iii)
- When , each server needs to transmit at a rate where and , which gives the number of linearly independent combinations needed to meet the demand. This yields a sum-rate of . The subset of servers may need to provide up to an additional linear combinations, and defines the maximum number of additional linear combinations per server, i.e., the required number of combinations when .
- (iv)
- When , it is easy to note that since any set of K linearly independent equations in (A23) suffices to recover , the sum-rate K is achievable.
From (i)–(iv), we obtain the following upper bound on the achievable sum-rate:
where it is easy to note that (A27) matches the communication cost in [Theorem 2] [39]. The i.i.d. distribution assumption for ensures that this result holds for any .
Appendix B.3. Proof of Proposition 2
As in the proof of Theorem 1, we let denote the union characteristic graph that server builds for computing . Note that given , the support set of server has a cardinality of . Because the user demand is a collection of Boolean functions, in this scenario, each server builds a graph with at most two independent sets, denoted by and , yielding the function values and , respectively.
Given the recovery threshold , any subset of servers with stores the set , which is sufficient to compute the demanded functions. Given server , consider the set of all , which satisfies
where notation denotes the dataset values for the set of datasets stored in the subset of servers . Note, in general, that . In the case of cyclic placement based on (1), out of the set of all datasets , there are datasets that belong exclusively to server . In this case, .
Note that (A28) captures the independent set . Equivalently, the set of dataset values that lands in of yields . The transmitted information takes the value with a probability
from which the upper bound on the achievable sum-rate can be determined.
Appendix B.4. Proof of Proposition 3
Recall that are i.i.d. across , and each server has a capacity . This means that given the number of datasets K, each server can compute the product of subfunctions and, hence, the minimum number of servers to evaluate the multi-linear function is such that given its capacity , each server can compute the product of a disjoint set of M subfunctions, i.e., , which operates at a rate of , . Exploiting the characteristic graph approach, we build for , with respect to variables and , and similarly for other servers to characterize the sum-rate for the computation by evaluating the entropy of each graph.
To evaluate the first term in (12), we choose a total of servers with a disjoint set of subfunctions. We denote the selected set of servers by , and the collective computation rate of these servers, as a function of the conditional graph entropies of these servers, becomes
where follows from assuming without loss of generality, and from the fact that the rate of server is positive only when , which is true with probability . Finally, follows from employing the sum of the terms in the geometric series, i.e., (While Proposition 3 uses the conditional graph entropies, the statements of Theorem 1 and Propositions 1 and 2 do not take into account the notion of conditional graph entropies. However, as indicated in Section 4.1 for computing linearly separable functions and in Section 4.2 for computing multi-linear functions, respectively, we used the conditional entropy-based sum-rate in (A30) to evaluate and illustrate the achievable gains over [39,60].).
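For completeness, the geometric-series identity invoked in the last step is the standard one; for a ratio $r \neq 1$ and $m$ terms,
\[
\sum_{i=0}^{m-1} r^{i} \;=\; \frac{1 - r^{m}}{1 - r}.
\]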
In the case of , the product of K subfunctions cannot be determined by servers, and we need additional servers to aid the computation and determine the outcome of by computing the product of the remaining subfunctions. In other words, if and , the -th server determines the outcome of by computing the product of the subfunctions , which cannot be captured by the previous servers. Hence, the additional rate, given by the second term in (12), is given by the product of the term
with , and . Combining this rate term with (A30), we prove the statement of the proposition.
References
- Yang, C.; Wu, H.; Huang, Q.; Li, Z.; Li, J. Using spatial principles to optimize distributed computing for enabling the physical science discoveries. Proc. Natl. Acad. Sci. USA 2011, 108, 5498–5503. [Google Scholar] [CrossRef]
- Shamsi, J.; Khojaye, M.A.; Qasmi, M.A. Data-intensive cloud computing: Requirements, expectations, challenges, and solutions. J. Grid Comput. 2013, 11, 281–310. [Google Scholar] [CrossRef]
- Yang, H.; Ding, T.; Yuan, X. Federated Learning With Lossy Distributed Source Coding: Analysis and Optimization. IEEE Trans. Commun. 2023, 71, 4561–4576. [Google Scholar] [CrossRef]
- Gan, G. Evaluation of room air distribution systems using computational fluid dynamics. Energy Build. 1995, 23, 83–93. [Google Scholar] [CrossRef]
- Gao, Y.; Wang, L.; Zhou, J. Cost-efficient and quality of experience-aware provisioning of virtual machines for multiplayer cloud gaming in geographically distributed data centers. IEEE Access 2019, 7, 142574–142585. [Google Scholar] [CrossRef]
- Lushbough, C.; Brendel, V. An overview of the BioExtract Server: A distributed, Web-based system for genomic analysis. In Advances in Computational Biology; Springer: New York, NY, USA, 2010; pp. 361–369. [Google Scholar]
- Dean, J.; Ghemawat, S. MapReduce: Simplified data processing on large clusters. Commun. ACM 2008, 51, 107–113. [Google Scholar] [CrossRef]
- Grolinger, K.; Hayes, M.; Higashino, W.A.; L’Heureux, A.; Allison, D.S.; Capretz, M.A. Challenges for MapReduce in Big Data. In Proceedings of the IEEE World Congress Services, Anchorage, AK, USA, 27 June–2 July 2014; pp. 182–189. [Google Scholar]
- Al-Khasawneh, M.A.; Shamsuddin, S.M.; Hasan, S.; Bakar, A.A. MapReduce a Comprehensive Review. In Proceedings of the International Conference on Smart Computing and Electronic Enterprise (ICSCEE), Shah Alam, Malaysia, 11–12 July 2018; pp. 1–6. [Google Scholar]
- Zaharia, M.; Chowdhury, M.; Franklin, M.J.; Shenker, S.; Stoica, I. Spark: Cluster computing with working sets. In Proceedings of the USENIX Workshop on Hot Topics in Cloud Computing, Boston, MA, USA, 22 June 2010. [Google Scholar]
- Khumoyun, A.; Cui, Y.; Hanku, L. Spark based distributed deep learning framework for big data applications. In Proceedings of the International Conference on Information Science and Communications Technologies (ICISCT), Tashkent, Uzbekistan, 2–4 November 2016; pp. 1–5. [Google Scholar]
- Orgerie, A.C.; Assuncao, M.D.d.; Lefevre, L. A survey on techniques for improving the energy efficiency of large-scale distributed systems. ACM Comput. Surv. 2014, 46, 1–31. [Google Scholar] [CrossRef]
- Keralapura, R.; Cormode, G.; Ramamirtham, J. Communication-Efficient Distributed Monitoring of Thresholded Counts. In Proceedings of the ACM SIGMOD International Conference on Management of Data, New York, NY, USA, 27–29 June 2006; pp. 289–300. [Google Scholar]
- Li, W.; Chen, Z.; Wang, Z.; Jafar, S.A.; Jafarkhani, H. Flexible constructions for distributed matrix multiplication. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Melbourne, Australia, 12–20 July 2021; pp. 1576–1581. [Google Scholar]
- Liu, Y.; Yu, F.R.; Li, X.; Ji, H.; Leung, V.C. Distributed resource allocation and computation offloading in fog and cloud networks with non-orthogonal multiple access. IEEE Trans. Veh. Tech. 2018, 67, 12137–12151. [Google Scholar] [CrossRef]
- Noormohammadpour, M.; Raghavendra, C.S. Datacenter traffic control: Understanding techniques and tradeoffs. IEEE Commun. Surv. Tutor. 2017, 20, 1492–1525. [Google Scholar] [CrossRef]
- Shivaratri, N.; Krueger, P.; Singhal, M. Load distributing for locally distributed systems. Computer 1992, 25, 33–44. [Google Scholar] [CrossRef]
- Bestavros, A. Demand-based document dissemination to reduce traffic and balance load in distributed information systems. In Proceedings of the IEEE Symposium on Parallel and Distributed Processing, San Antonio, TX, USA, 25–28 October 1995; pp. 338–345. [Google Scholar]
- Reisizadeh, A.; Prakash, S.; Pedarsani, R.; Avestimehr, A.S. Tree Gradient Coding. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 2808–2812. [Google Scholar]
- Ozfatura, E.; Gündüz, D.; Ulukus, S. Gradient Coding with Clustering and Multi-Message Communication. In Proceedings of the IEEE Data Science Workshop, Minneapolis, MN, USA, 2–7 June 2019; pp. 42–46. [Google Scholar]
- Tandon, R.; Lei, Q.; Dimakis, A.G.; Karampatziakis, N. Gradient Coding: Avoiding Stragglers in Distributed Learning. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 31 July–3 August 2017. [Google Scholar]
- Ye, M.; Abbe, E. Communication-computation efficient gradient coding. In Proceedings of the International Conference on Machine Learning, PMLR, Stockholm, Sweden, 10–15 July 2018; pp. 5610–5619. [Google Scholar]
- Halbawi, W.; Azizan, N.; Salehi, F.; Hassibi, B. Improving Distributed Gradient Descent Using Reed-Solomon Codes. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 2027–2031. [Google Scholar]
- Maddah-Ali, M.A.; Niesen, U. Fundamental limits of caching. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Istanbul, Türkiye, 7–12 July 2013; pp. 1077–1081. [Google Scholar]
- Karamchandani, N.; Niesen, U.; Maddah-Ali, M.A.; Diggavi, S.N. Hierarchical coded caching. IEEE Trans. Info Theory 2016, 62, 3212–3229. [Google Scholar] [CrossRef]
- Li, S.; Supittayapornpong, S.; Maddah-Ali, M.A.; Avestimehr, S. Coded TeraSort. In Proceedings of the IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Lake Buena Vista, FL, USA, 29 May–2 June 2017. [Google Scholar]
- Li, S.; Maddah-Ali, M.A.; Yu, Q.; Avestimehr, A.S. A fundamental tradeoff between computation and communication in distributed computing. IEEE Trans. Inf. Theory 2017, 64, 109–128. [Google Scholar] [CrossRef]
- Yu, Q.; Maddah-Ali, M.A.; Avestimehr, A.S. The exact rate-memory tradeoff for caching with uncoded prefetching. IEEE Trans. Inf. Theory 2018, 64, 1281–1296. [Google Scholar] [CrossRef]
- Naderializadeh, N.; Maddah-Ali, M.A.; Avestimehr, A.S. Fundamental limits of cache-aided interference management. IEEE Trans. Inf. Theory 2017, 63, 3092–3107. [Google Scholar] [CrossRef]
- Subramaniam, A.M.; Heidarzadeh, A.; Narayanan, K.R. Collaborative decoding of polynomial codes for distributed computation. In Proceedings of the IEEE Information Theory Workshop (ITW), Visby, Sweden, 25–28 August 2019; pp. 1–5. [Google Scholar]
- Dutta, S.; Fahim, M.; Haddadpour, F.; Jeong, H.; Cadambe, V.; Grover, P. On the optimal recovery threshold of coded matrix multiplication. IEEE Trans. Inf. Theory 2019, 66, 278–301. [Google Scholar] [CrossRef]
- Yosibash, R.; Zamir, R. Frame codes for distributed coded computation. In Proceedings of the International Symposium on Topics in Coding, Montreal, QC, Canada, 18–21 August 2021; pp. 1–5. [Google Scholar]
- Dimakis, A.G.; Godfrey, P.B.; Wu, Y.; Wainwright, M.J.; Ramchandran, K. Network coding for distributed storage systems. IEEE Trans. Inf. Theory 2010, 56, 4539–4551. [Google Scholar] [CrossRef]
- Wan, K.; Sun, H.; Ji, M.; Tuninetti, D.; Caire, G. Cache-aided matrix multiplication retrieval. IEEE Trans. Inf. Theory 2022, 68, 4301–4319. [Google Scholar] [CrossRef]
- Jia, Z.; Jafar, S.A. On the capacity of secure distributed batch matrix multiplication. IEEE Trans. Inf. Theory 2021, 67, 7420–7437. [Google Scholar] [CrossRef]
- Soleymani, M.; Mahdavifar, H.; Avestimehr, A.S. Analog lagrange coded computing. IEEE J. Sel. Areas Inf. Theory 2021, 2, 283–295. [Google Scholar] [CrossRef]
- Yu, Q.; Maddah-Ali, M.A.; Avestimehr, S. Polynomial codes: An optimal design for high-dimensional coded matrix multiplication. In Proceedings of the International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4403–4413. [Google Scholar]
- López, H.H.; Matthews, G.L.; Valvo, D. Secure MatDot codes: A secure, distributed matrix multiplication scheme. In Proceedings of the IEEE Information Theory Workshop (ITW), Mumbai, India, 6–9 November 2022; pp. 149–154. [Google Scholar]
- Wan, K.; Sun, H.; Ji, M.; Caire, G. Distributed linearly separable computation. IEEE Trans. Inf. Theory 2021, 68, 1259–1278. [Google Scholar] [CrossRef]
- Zhu, J.; Li, S.; Li, J. Information-theoretically private matrix multiplication from MDS-coded storage. IEEE Trans. Inf. Forensics Secur. 2023, 18, 1680–1695. [Google Scholar] [CrossRef]
- Das, A.B.; Ramamoorthy, A.; Vaswani, N. Efficient and Robust Distributed Matrix Computations via Convolutional Coding. IEEE Trans. Inf. Theory. 2021, 67, 6266–6282. [Google Scholar] [CrossRef]
- Yu, Q.; Maddah-Ali, M.A.; Avestimehr, A.S. Straggler Mitigation in Distributed Matrix Multiplication: Fundamental Limits and Optimal Coding. IEEE Trans. Inf. Theory. 2020, 66, 1920–1933. [Google Scholar] [CrossRef]
- Fawzi, A.; Balog, M.; Huang, A.; Hubert, T.; Romera-Paredes, B.; Barekatain, M.; Novikov, A.; R Ruiz, F.J.; Schrittwieser, J.; Swirszcz, G.; et al. Discovering faster matrix multiplication algorithms with reinforcement learning. Nature 2022, 610, 47–53. [Google Scholar] [CrossRef] [PubMed]
- Aliasgari, M.; Simeone, O.; Kliewer, J. Private and secure distributed matrix multiplication with flexible communication load. IEEE Trans. Inf. Forensics Secur. 2020, 15, 2722–2734. [Google Scholar] [CrossRef]
- D’Oliveira, R.G.; El Rouayheb, S.; Heinlein, D.; Karpuk, D. Notes on communication and computation in secure distributed matrix multiplication. In Proceedings of the IEEE Conference on Communications and Network Security, Virtual, 29 June–1 July 2020; pp. 1–6. [Google Scholar]
- Rashmi, K.V.; Shah, N.B.; Kumar, P.V. Optimal exact-regenerating codes for distributed storage at the MSR and MBR points via a product-matrix construction. IEEE Trans. Inf. Theory 2011, 57, 5227–5239. [Google Scholar] [CrossRef]
- Cancès, E.; Friesecke, G. Density Functional Theory: Modeling, Mathematical Analysis, Computational Methods, and Applications, 1st ed.; Springer Nature: Berlin/Heidelberg, Germany, 2023. [Google Scholar]
- Hanna, O.A.; Ezzeldin, Y.H.; Sadjadpour, T.; Fragouli, C.; Diggavi, S. On distributed quantization for classification. IEEE J. Sel. Areas Inf. Theory 2020, 1, 237–249. [Google Scholar] [CrossRef]
- Luo, P.; Xiong, H.; Lü, K.; Shi, Z. Distributed classification in peer-to-peer networks. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Jose, CA, USA, 12–15 August 2007; pp. 968–976. [Google Scholar]
- Karakus, C.; Sun, Y.; Diggavi, S.; Yin, W. Straggler mitigation in distributed optimization through data encoding. In Proceedings of the International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5434–5442. [Google Scholar]
- Jia, Z.; Jafar, S.A. Cross subspace alignment codes for coded distributed batch computation. IEEE Trans. Inf. Theory 2021, 67, 2821–2846. [Google Scholar] [CrossRef]
- Wang, J.; Jia, Z.; Jafar, S.A. Price of Precision in Coded Distributed Matrix Multiplication: A Dimensional Analysis. In Proceedings of the IEEE Information Theory Workshop (ITW), Kanazawa, Japan, 17–21 October 2021; pp. 1–6. [Google Scholar]
- Chang, W.T.; Tandon, R. On the capacity of secure distributed matrix multiplication. In Proceedings of the IEEE Global Communications Conference, Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1–6. [Google Scholar]
- Monagan, M.; Pearce, R. Parallel sparse polynomial multiplication using heaps. In Proceedings of the International Symposium on Symbolic and Algebraic Computation, Seoul, Republic of Korea, 28–31 July 2009; pp. 263–270. [Google Scholar]
- Hsu, C.D.; Jeong, H.; Pappas, G.J.; Chaudhari, P. Scalable reinforcement learning policies for multi-agent control. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Prague, Czech Republic, 27 September–1 October 2021; pp. 4785–4791. [Google Scholar]
- Goldenbaum, M.; Boche, H.; Stańczak, S. Nomographic functions: Efficient computation in clustered Gaussian sensor networks. IEEE Trans. Wirel. Commun. 2014, 14, 2093–2105. [Google Scholar] [CrossRef]
- Goldenbaum, M.; Boche, H.; Stańczak, S. Harnessing interference for analog function computation in wireless sensor networks. IEEE Trans. Signal Process. 2013, 61, 4893–4906. [Google Scholar] [CrossRef]
- Huang, W.; Wan, K.; Sun, H.; Ji, M.; Qiu, R.C.; Caire, G. Fundamental Limits of Distributed Linearly Separable Computation under Cyclic Assignment. In Proceedings of the IEEE International Symposium on Information Theory (ISIT’23), Taipei, Taiwan, 25–30 June 2023. [Google Scholar]
- Wan, K.; Sun, H.; Ji, M.; Caire, G. On Secure Distributed Linearly Separable Computation. IEEE J. Sel. Areas Commun. 2022, 40, 912–926. [Google Scholar] [CrossRef]
- Slepian, D.; Wolf, J.K. Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19, 471–480. [Google Scholar] [CrossRef]
- Cover, T. A proof of the data compression theorem of Slepian and Wolf for ergodic sources. IEEE Trans. Inf. Theory 1975, 21, 226–228. [Google Scholar] [CrossRef]
- Korner, J.; Marton, K. How to encode the modulo-two sum of binary sources. IEEE Trans. Inf. Theory 1979, 25, 219–221. [Google Scholar] [CrossRef]
- Lalitha, V.; Prakash, N.; Vinodh, K.; Kumar, P.V.; Pradhan, S.S. Linear coding schemes for the distributed computation of subspaces. IEEE J. Sel. Areas Commun. 2013, 31, 678–690. [Google Scholar] [CrossRef]
- Yamamoto, H. Wyner-Ziv theory for a general function of the correlated sources. IEEE Trans. Inf. Theory 1982, 28, 803–807. [Google Scholar] [CrossRef]
- Wyner, A.; Ziv, J. The rate-distortion function for source coding with side information at the decoder. IEEE Trans. Inf. Theoy 1976, 22, 1–10. [Google Scholar] [CrossRef]
- Wan, K.; Sun, H.; Ji, M.; Tuninetti, D.; Caire, G. Cache-Aided General Linear Function Retrieval. Entropy 2020, 23, 25. [Google Scholar] [CrossRef]
- Khalesi, A.; Elia, P. Multi-User Linearly-Separable Distributed Computing. IEEE. Trans. Inf. Theory 2023, 69, 6314–6339. [Google Scholar] [CrossRef]
- Wan, K.; Sun, H.; Ji, M.; Caire, G. On the Tradeoff Between Computation and Communication Costs for Distributed Linearly Separable Computation. IEEE Trans. Commun. 2021, 69, 7390–7405. [Google Scholar] [CrossRef]
- Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine learning for medical imaging. Radiographics 2017, 37, 505–515. [Google Scholar] [CrossRef] [PubMed]
- Correa, N.M.; Adali, T.; Li, Y.O.; Calhoun, V.D. Canonical Correlation Analysis for Data Fusion and Group Inferences. IEEE Signal Process. Mag. 2010, 27, 39–50. [Google Scholar] [CrossRef] [PubMed]
- Kant, G.; Sangwan, K.S. Predictive modeling for power consumption in machining using artificial intelligence techniques. Procedia CIRP 2015, 26, 403–407. [Google Scholar] [CrossRef]
- Han, T.; Kobayashi, K. A dichotomy of functions F(X, Y) of correlated sources (X, Y). IEEE Trans. Inf. Theory 1987, 33, 69–76. [Google Scholar] [CrossRef]
- Alon, N.; Orlitsky, A. Source coding and graph entropies. IEEE Trans. Inf. Theory 1996, 42, 1329–1339. [Google Scholar] [CrossRef]
- Orlitsky, A.; Roche, J.R. Coding for computing. IEEE Trans. Inf. Theory 2001, 47, 903–917. [Google Scholar] [CrossRef]
- Körner, J. Coding of an information source having ambiguous alphabet and the entropy of graphs. In Proceedings of the Prague Conference on Information Theory, Prague, Czech Republic, 19–25 September 1973. [Google Scholar]
- Malak, D. Fractional Graph Coloring for Functional Compression with Side Information. In Proceedings of the IEEE Information Theory Workshop (ITW), Mumbai, India, 6–9 November 2022. [Google Scholar]
- Malak, D. Weighted graph coloring for quantized computing. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Taipei, Taiwan, 25–30 June 2023; pp. 2290–2295. [Google Scholar]
- Charpenay, N.; Le Treust, M.; Roumy, A. Complementary Graph Entropy, AND Product, and Disjoint Union of Graphs. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Taipei, Taiwan, 25–30 June 2023; pp. 2446–2451. [Google Scholar]
- Deylam Salehi, M.R.; Malak, D. An Achievable Low Complexity Encoding Scheme for Coloring Cyclic Graphs. In Proceedings of the Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 26–29 September 2023; pp. 1–8. [Google Scholar]
- Maugey, T.; Rizkallah, M.; Mahmoudian Bidgoli, N.; Roumy, A.; Guillemot, C. Graph Spectral 3D Image Compression. In Graph Spectral Image Processing; Wiley: Hoboken, NJ, USA, 2021; pp. 105–128. [Google Scholar]
- Sevilla, J.L.; Segura, V.; Podhorski, A.; Guruceaga, E.; Mato, J.M.; Martinez-Cruz, L.A.; Corrales, F.J.; Rubio, A. Correlation between gene expression and GO semantic similarity. IEEE/ACM Trans. Comput. Biol. Bioinf. 2005, 2, 330–338. [Google Scholar] [CrossRef] [PubMed]
- Feizi, S.; Médard, M. On network functional compression. IEEE Trans. Inf. Theory 2014, 60, 5387–5401. [Google Scholar] [CrossRef]
- Maddah-Ali, M.A.; Niesen, U. Fundamental limits of caching. IEEE Trans. Inf. Theory 2014, 60, 2856–2867. [Google Scholar] [CrossRef]
- Mosk-Aoyama, D.; Shah, D. Fast Distributed Algorithms for Computing Separable Functions. IEEE Trans. Inf. Theory 2008, 54, 2997–3007. [Google Scholar] [CrossRef]
- Kaur, G. A comparison of two hybrid ensemble techniques for network anomaly detection in spark distributed environment. J. Inform. Secur. Appl. 2020, 55, 102601. [Google Scholar] [CrossRef]
- Chen, J.; Li, J.; Huang, R.; Yue, K.; Chen, Z.; Li, W. Federated learning for bearing fault diagnosis with dynamic weighted averaging. In Proceedings of the International Conference on Sensing, Measurement & Data Analytics in the era of Artificial Intelligence, Nanjing, China, 21–23 October 2021; pp. 1–6. [Google Scholar]
- Zhao, J.; Govindan, R.; Estrin, D. Computing aggregates for monitoring wireless sensor networks. In Proceedings of the IEEE International Workshop on Sensor Network Protocols and Applications, Anchorage, AK, USA, 1 January 2003; pp. 139–148. [Google Scholar]
- Giselsson, P.; Rantzer, A. Large-Scale and Distributed Optimization: An Introduction, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2018; Volume 2227. [Google Scholar]
- Kavadias, S.; Chao, R.O. Resource Allocation and New Product Development Portfolio Management, 1st ed.; Elsevier: Amsterdam, The Netherlands; Butterworth-Heinemann: Oxford, UK, 2007; pp. 135–163. [Google Scholar]
- Diniz, C.A.R.; Tutia, M.H.; Leite, J.G. Bayesian analysis of a correlated binomial model. Braz. J. Probab. Stat. 2010, 24, 68–77. [Google Scholar] [CrossRef]
- Boland, P.J.; Proschan, F.; Tong, Y. Some majorization inequalities for functions of exchangeable random variables. Lect. Not.-Mono. Ser. 1990, 85–91. [Google Scholar]
- Witsenhausen, H. The zero-error side information problem and chromatic numbers (corresp.). IEEE Trans. Inf. Theory 1976, 22, 592–593. [Google Scholar] [CrossRef]
- Moon, J.W.; Moser, L. On cliques in graphs. Israel J. Math. 1965, 3, 23–28. [Google Scholar] [CrossRef]
- Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1991. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).