Article

Expand-and-Randomize: An Algebraic Approach to Secure Computation

Department of Electrical Engineering, University of North Texas, Denton, TX 76203, USA
*
Authors to whom correspondence should be addressed.
Entropy 2021, 23(11), 1461; https://doi.org/10.3390/e23111461
Submission received: 25 September 2021 / Revised: 31 October 2021 / Accepted: 2 November 2021 / Published: 4 November 2021
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract
We consider the secure computation problem in a minimal model, where Alice and Bob each hold an input and wish to securely compute a function of their inputs at Carol without revealing any additional information about the inputs. For this minimal secure computation problem, we propose a novel coding scheme built from two steps. First, the function to be computed is expanded such that it can be recovered while additional information might be leaked. Second, a randomization step is applied to the expanded function such that the leaked information is protected. We implement this expand-and-randomize coding scheme with two algebraic structures: the finite field and the modulo ring of integers, where the expansion step is realized with the addition operation and the randomization step is realized with the multiplication operation over the respective algebraic structures.

1. Introduction

Cryptographic primitives are canonical and representative problems that capture the key challenges in understanding the fundamentals of security and privacy, and are essential building blocks for more sophisticated systems and protocols. There is much recent interest in using information theoretic tools to tackle classical cryptographic primitives [1,2,3,4,5,6,7]. Along this line, the focus of this work is on a widely studied primitive in cryptography: secure (multiparty) computation [8].
Secure computation refers to the problem where a number of users wish to securely compute a function of their inputs without revealing any unnecessary information. Interestingly, challenging as it seems, secure computation is always feasible: with at least three users, any function can be computed securely in the information theoretic sense [9,10]. We consider the most basic model, with honest-but-curious and non-colluding users; many other variants have been considered in the literature, which may or may not be feasible depending on the specific assumptions and system parameters of the variant (see, e.g., [11]). However, what is largely open is how to perform secure computation optimally, i.e., efficient secure computation solutions are not known for most cases [6].
The main motivation of this work is to make progress towards constructing efficient secure computation codes. Towards this end, we focus on a minimal model of secure computation, introduced by Feige, Kilian, and Naor in 1994 [12]. In this model (see Figure 1), there are three users: Alice, Bob, and Carol. Alice and Bob have inputs W 1 and W 2 , respectively, and wish to compute a function f ( W 1 , W 2 ) at Carol without revealing any additional information about their inputs beyond what is revealed by the function itself. To do so, Alice and Bob share a common random variable Z that is independent of the inputs and send codewords X 1 and X 2 to Carol, respectively. From X 1 , X 2 , Carol can recover f ( W 1 , W 2 ) and conditioned on f ( W 1 , W 2 ) , the codewords X 1 and X 2 are independent of W 1 , W 2 so that no additional information is leaked.
The key feature of this formulation is that the communication protocol consists of only one codeword from each party that holds the input (thus non-interactive), while for the general secure computation formulation [9,10], interactive protocols are allowed and typically used. Elemental as it seems, this minimal secure computation problem preserves most challenging features of general secure computation; in particular, feasibility results remain strong and optimality results remain weak, i.e., any function f can be computed securely while the construction of efficient codes remains open in general [12,13]. In this work, we focus exclusively on the original three-party formulation of minimal secure computation [12], but note that many interesting variants have been studied (sometimes under different names to highlight different assumptions) in the literature, e.g., more than three parties [14,15,16], colluding parties [17,18,19,20], other security notions [21], and unresponsive parties [22].
The main contribution of this work is a novel coding scheme that relies on algebraic structures to ensure correctness and security. To illustrate the idea of our coding scheme, let us first consider an example. Suppose Alice and Bob each hold a ternary input, W1, W2 ∈ {0, 1, 2}, and wish to compute whether W1 is equal to W2, i.e., f(W1, W2) = Yes if W1 = W2, and f(W1, W2) = No otherwise.
As the equality function may not be easily computed in a secure manner, we first expand it to a linear function, which is simpler to deal with. As shown in Figure 2, we use the linear function W1 − W2 over the finite field F3 (equivalent to operations modulo 3). For this expansion, we require that the original function can be fully recovered from the expanded function. This is easily verified for this example, where W1 − W2 ≠ 0 if and only if W1 is not equal to W2. This expansion step alone does not solve the secure computation problem, because additional information may be leaked. For example, here Carol should only learn whether W1 − W2 = 0 and is not supposed to learn whether W1 − W2 is 1 or 2. To prevent this leakage, we invoke another step of randomization so that the information leaked by the expanded function becomes confusable and is thus protected. For this equality function example, when W1 − W2 ≠ 0, we wish to make the result equally likely to be 1 or 2. This is realized by multiplying W1 − W2 by γ, where γ is uniform over {1, 2}. The multiplication operation is also over F3. Thus,
when W1 − W2 = 1, γ × (W1 − W2) = γ is equally likely to be 1 or 2;
when W1 − W2 = 2, γ × (W1 − W2) = γ × 2 is equally likely to be 1 or 2.
Note that 2 × 2 = 4 = 1 over F 3 . After this randomization step, the randomized expanded function does not reveal any additional information beyond the original equality function. The above expand-and-randomize procedure can be easily converted to a distributed secure computation protocol. In particular, Alice and Bob share a common random variable, Z = ( γ , z ) , where γ and z are independent, γ is uniform over { 1 , 2 } , and z is uniform over { 0 , 1 , 2 } . The codewords X 1 , X 2 sent by Alice and Bob to Carol are
X 1 = γ × W 1 + z ,
X 2 = γ × W 2 + z .
To decode f(W1, W2) with no error, Carol subtracts X2 from X1, obtaining X1 − X2 = γ × (W1 − W2), and declares that W1 is equal to W2 if and only if X1 − X2 = 0. To see why perfect security holds, note that (X1, X2) is invertible to (X1 − X2, X2); neither X1 − X2 nor X2 (protected by the independent uniform noise z) leaks any information. Specifically, the joint distribution of (X1, X2) remains the same for all (W1, W2) pairs for which f(W1, W2) is the same. That is, when W1 is equal to W2, i.e., (W1, W2) ∈ {(0, 0), (1, 1), (2, 2)}, the pairs (X1, X2) are identically distributed (X1 is uniform over {0, 1, 2} and X2 is equal to X1), and the same observation holds for all (W1, W2) pairs where W1 is not equal to W2 (X1 − X2 is uniform over {1, 2}; X2 is independent of X1 − X2 and is uniform over {0, 1, 2}). Interestingly, this secure computation code is also communication optimal, i.e., the size of X1 and X2 must be no less than log2 3 bits each (required even if there is no security constraint).
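As a sanity check, the whole protocol for the ternary equality function can be simulated in a few lines. The sketch below is our own minimal Python illustration (not from the paper); it exhaustively verifies correctness and checks that the distribution of (X1, X2) depends only on whether W1 = W2.

```python
import itertools
from collections import Counter

q = 3  # arithmetic over the prime field F_3 (i.e., modulo 3)

def encode(w1, w2, gamma, z):
    """Alice's and Bob's codewords for the ternary equality function."""
    x1 = (gamma * w1 + z) % q
    x2 = (gamma * w2 + z) % q
    return x1, x2

def decode(x1, x2):
    """Carol declares 'equal' iff X1 - X2 = 0 over F_3."""
    return (x1 - x2) % q == 0

# Correctness: exhaustively check all inputs and all shared randomness.
for w1, w2 in itertools.product(range(q), repeat=2):
    for gamma in (1, 2):
        for z in range(q):
            assert decode(*encode(w1, w2, gamma, z)) == (w1 == w2)

# Security: the distribution of (X1, X2), over the shared randomness
# Z = (gamma, z), must depend only on whether W1 = W2.
def dist(w1, w2):
    return Counter(encode(w1, w2, g, z) for g in (1, 2) for z in range(q))

assert dist(0, 0) == dist(1, 1) == dist(2, 2)                # "Yes" inputs
assert dist(0, 1) == dist(1, 0) == dist(1, 2) == dist(2, 0)  # "No" inputs
```

The exhaustive loops are feasible here because the alphabets are tiny; they mirror the case analysis in the text.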
A closer inspection of the above scheme reveals that the key is to find an expanded function such that the expanded function outputs corresponding to the same original function output can be randomized to be fully confusable. The first main result of this work is to characterize the structural properties of such confusable sets over the finite field F q , where q is a prime power. The confusable function outputs turn out to be characterized by the property that their discrete logarithms (in exponential representation of the finite field elements) have the same remainder in modular arithmetic. Details will be presented in Section 3.1.
As it turns out, the expand-and-randomize coding scheme is not limited to the finite field. As our second main result, we implement it over the ring of integers modulo n, Zn = {0, 1, …, n − 1}. The ring is equipped with two operations, addition and multiplication, both defined in modulo n arithmetic. Let us consider an example to illustrate how Zn is used. Consider the selected-switch function in Figure 3. Alice has a binary input, W1 ∈ {0, 1}. Bob has a ternary input, W2 ∈ {0, 1, 2}. When W1 ≥ W2, the switch function f is OFF and the output is 0 (we may think of the output as not connected to the input, so it is a constant). When W1 < W2, the switch function f is ON and the output is equal to the input vector (all information about W1, W2 goes through).
Following the expand-and-randomize coding paradigm, we first expand the original function to the addition function over Z6 such that it can be fully recovered. Note that, to facilitate the construction of the expanded function, here we perform an invertible transformation on the inputs: W1 → W̃1 (0 → 4, 1 → 2), W2 → W̃2 (0 → 0, 1 → 2, 2 → 5). The expanded function reveals more information than allowed when the output is 2 or 4. To protect this information, a randomization step is realized by multiplying by γ, which is uniform over {1, 5}. Now, 2 × {1, 5} = {2, 10} = {2, 4} modulo 6, and 4 × {1, 5} = {4, 20} = {2, 4} modulo 6. Therefore, the expanded function after randomization can be used to produce the following secure computation protocol. The codewords are X1 = γ × W̃1 + z, X2 = γ × W̃2 − z, where Z = (γ, z), γ and z are independent, γ is uniform over {1, 5}, and z is uniform over {0, 1, 2, 3, 4, 5}. To decode, Carol computes X1 + X2 = γ × (W̃1 + W̃2). Comparing the original function f and the randomized expanded function γ × (W̃1 + W̃2), it is easy to construct the decoding rule based on X1 + X2 (see Figure 3). Following a straightforward argument as presented above, we may show that the correctness and security constraints are satisfied. Details will be presented in Theorem 1.
From this example, we find that the crux of the scheme is a partition of the elements of Z6 into several disjoint confusable sets such that, when any two elements of a confusable set S are multiplied by γ, which is uniform over a carefully chosen set (γ is referred to as the randomizer), they produce identically distributed values; specifically, both produce the confusable set S.
Z6 = {0} ∪ {1, 5} ∪ {2, 4} ∪ {3}, γ is uniform over {1, 5}; over Z6: 1 × {1, 5} = 5 × {1, 5} = {1, 5}, 2 × {1, 5} = 4 × {1, 5} = {2, 4}, 3 × {1, 5} = {3}.
The main technical challenge is to understand which sets of elements can serve as the randomizer γ and how the ring Zn is partitioned into disjoint confusable sets such that security is guaranteed. For this purpose, we require a few notions from group theory and number theory. Details are presented in Section 3.2. To get a glimpse, consider the above example, where the randomizer γ is drawn from the set of integers that are coprime with 6 (both 1 and 5 are coprime with 6), and the confusable sets are the sets of integers that have the same greatest common divisor with 6 (e.g., gcd(2, 6) = gcd(4, 6) = 2).
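The confusable-set structure of Z6 described above can be checked mechanically. The following short Python sketch is our own illustration of the partition and the randomizer property:

```python
from collections import Counter

n = 6
randomizer = (1, 5)  # the units of Z_6: integers coprime with 6
confusable_sets = [{0}, {1, 5}, {2, 4}, {3}]

# Each element of a confusable set, multiplied by a uniform gamma,
# must be uniform over that same confusable set.
for S in confusable_sets:
    for s in S:
        image = Counter((g * s) % n for g in randomizer)
        assert set(image) == S                  # support is exactly S
        assert len(set(image.values())) == 1    # equal counts: uniform

# The confusable sets partition Z_6 = {0, 1, ..., 5}.
assert sorted(x for S in confusable_sets for x in S) == list(range(n))
```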
Our proposed coding scheme is inspired by two examples (the binary logical AND function and the ternary comparison function) presented in Appendix A and Appendix B of the original minimal secure computation paper [12], where modular arithmetic over a prime number p is used. Note that for a prime p, the algebraic operations (addition and multiplication) in both the finite field Fp and the ring of integers modulo p, Zp, are modular arithmetic. Along this line, our work can be viewed as a generalization of the examples from [12] to a general class of achievable schemes that distill the underlying algebraic structure and work over finite fields and modulo rings of integers with general (non-prime) cardinality.

2. Problem Statement

Consider a pair of inputs (W1, W2) ∈ {0, 1, …, m1 − 1} × {0, 1, …, m2 − 1} and a function f : {0, 1, …, m1 − 1} × {0, 1, …, m2 − 1} → {0, 1, …, |f| − 1}. We assume the function f is discrete, and use |f| to denote the cardinality of the range of f. W1 is available to Alice and W2 is available to Bob. Alice and Bob also both hold a common random variable Z whose distribution does not depend on W1, W2.
Alice and Bob wish to compute f ( W 1 , W 2 ) securely. To this end, Alice sends a codeword X 1 and Bob sends a codeword X 2 to Carol. X 1 is a function of W 1 and Z, and has L 1 bits. X 2 is a function of W 2 and Z, and has L 2 bits. The function f is known to Alice, Bob, and Carol. As our proposed code will have a fixed length, here we only define fixed-length codes, i.e., L 1 does not depend on the value of W 1 . In general, variable-length codes might have a lower expected length (see Remark 3).
From X 1 , X 2 , Carol can recover f ( W 1 , W 2 ) with no error. This is referred to as the correctness constraint. To ensure Carol does not learn anything beyond f ( W 1 , W 2 ) , the following security constraint must be satisfied.
(Security) For any joint distribution of (W1, W2), I(X1, X2; W1, W2 | f(W1, W2)) = 0.
Equivalently, the security constraint can be stated as follows.
For any (W1, W2) pairs such that the values f(W1, W2) are equal, the pairs (X1, X2) are identically distributed.
A rate tuple ( L 1 , L 2 ) is said to be achievable if there exists a secure computation scheme, for which the correctness and security constraints are satisfied. The closure of the set of all achievable rate tuples is called the optimal rate region.
The main result of this work is a new achievable scheme for secure computation; the new scheme works for any joint distribution of (W1, W2), so we do not specify this joint distribution explicitly. Further, for simplicity, we introduce the problem statement as a scalar coding problem. Concrete distributions will be given and L-length extensions (block inputs) will be considered when they play more significant roles in the results, e.g., when we discuss ϵ-error schemes in Section 4.2 and converse results in Section 3.3.

3. The Main Coding Scheme

In this section, we present a novel secure computation code that implements the expand-and-randomize scheme over the finite field F q and the ring of integers modulo n, denoted as Z n . Let us start with relevant definitions.
Definition 1 (Confusable Sets and Randomizer).
Sets S0, S1, S2, … are called confusable sets if they form a partition of all elements of Fq or Zn and there exists a uniform random variable γ over a set S ⊆ Fq or Zn such that, for all s ∈ Si, γ × s is uniform over Si. γ is called the randomizer.
The requirement in Definition 1 is stronger than what is needed for security: it suffices to have identical (instead of uniform) distributions over some disjoint sets (instead of the confusable sets). However, for our proposed scheme, it turns out that these relaxations do not lead to improved achievable rate regions, so they are not considered, for simplicity. The notions introduced in Definition 1 are closely related to group actions (and orbits) studied in group theory [23]. Along this line, the main effort of this work is to identify a class of group actions that can be used in secure computation and to prove that the class found indeed forms valid group actions. Our proof is relatively elementary, relying on basic number theoretic properties; it is possible to alternatively prove the statements using group actions. We view our main findings as identifying which operations form group actions and their applicability to secure computation (while the proof of validity can be done in various ways). Another related concept that has been studied in cryptography is randomized polynomials [24,25], which also rely on extra randomization to expand the deterministic function to be computed. Our work uses different randomization techniques.
Definition 2 (Feasible Expanded Function).
For a function f(W1, W2), a function f̃(W̃1, W̃2) = W̃1 + W̃2 over Fq or Zn is called a feasible expanded function if the mapping between W1 and W̃1, the mapping between W2 and W̃2, and the mapping between f(W1, W2) and the index of the confusable set to which f̃(W̃1, W̃2) belongs are all invertible.
For an example of a feasible expanded function (for the equality function with ternary inputs) over F3, see Figure 2. Specifically, F3 = {0, 1, 2} = S0 ∪ S1, where S0 = {0}, S1 = {1, 2}. γ is uniform over S = {1, 2}. γ × 1 and γ × 2 are both uniformly distributed over S1. W̃1 = W1, W̃2 = −W2. f is the equality function and f̃ is W1 − W2. The Yes output of f is mapped to S0 under f̃ and the No output of f is mapped to S1 under f̃. For an example of a feasible expanded function over Z6, see Figure 3.
A feasible expanded function as defined above naturally leads to a correct and secure computation scheme, presented in the following theorem.
Theorem 1.
For any function f ( W 1 , W 2 ) , if we have a feasible expanded function f ˜ ( W ˜ 1 , W ˜ 2 ) = W ˜ 1 + W ˜ 2 over F q or Z n , then the following computation code is both correct and secure:
X1 = γ × W̃1 + z, X2 = γ × W̃2 − z,
where Z = ( z , γ ) , γ is the randomizer, z and γ are independent, and z is uniform over F q or Z n . Specifically, in this scheme, Alice and Bob each sends a symbol from F q or Z n to Carol.
Proof of  Theorem 1.
The proof of correctness and security follows in a straightforward manner from the definitions of the confusable sets, the randomizer, and the feasible expanded function. First, we consider the correctness constraint. To recover f(W1, W2) with no error, Carol may compute X1 + X2 = γ × (W̃1 + W̃2), from which Carol can uniquely identify the index of the confusable set (invertible to the original function output). Note that, by the definition of the confusable sets and the randomizer, multiplying by γ does not change the confusable set index. Second, we consider the security constraint. Consider any (W1, W2) pairs that produce the same f(W1, W2) output; we show that the corresponding (X1, X2) are identically distributed. To see this, note that (X1, X2) is invertible to (X1 + X2, X2) = (γ × (W̃1 + W̃2), γ × W̃2 − z). By the definition of the confusable sets, X1 + X2 is uniform over the confusable set that corresponds to f(W1, W2); X2 is independent of X1 + X2 and is uniform over Fq or Zn due to the uniformity and independence of z. Therefore, the distribution of (X1 + X2, X2), and hence that of (X1, X2), depends only on f(W1, W2), i.e., such pairs are identically distributed. The proof is complete. □
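To make the scheme of Theorem 1 concrete, the following Python sketch (our illustration, not part of the paper) instantiates it for the selected-switch function of Figure 3 over Z6, using the input relabelings and confusable sets stated in Section 1:

```python
import itertools
from collections import Counter

n = 6
units = (1, 5)                 # randomizer support: integers coprime with 6
w1_tilde = {0: 4, 1: 2}        # invertible relabeling of Alice's input
w2_tilde = {0: 0, 1: 2, 2: 5}  # invertible relabeling of Bob's input

# Confusable set containing each element of Z_6.
set_of = {s: frozenset(S) for S in [{0}, {1, 5}, {2, 4}, {3}] for s in S}

def f(w1, w2):
    """Selected-switch: OFF when W1 >= W2; otherwise the inputs pass through."""
    return "OFF" if w1 >= w2 else (w1, w2)

def encode(w1, w2, gamma, z):
    x1 = (gamma * w1_tilde[w1] + z) % n
    x2 = (gamma * w2_tilde[w2] - z) % n
    return x1, x2

# Carol's decoding table: confusable set of X1 + X2  ->  function output.
decode_table = {set_of[(w1_tilde[w1] + w2_tilde[w2]) % n]: f(w1, w2)
                for w1 in (0, 1) for w2 in (0, 1, 2)}

def decode(x1, x2):
    return decode_table[set_of[(x1 + x2) % n]]

# Correctness over all inputs and all shared randomness Z = (gamma, z).
for w1, w2, gamma, z in itertools.product((0, 1), (0, 1, 2), units, range(n)):
    assert decode(*encode(w1, w2, gamma, z)) == f(w1, w2)

# Security: the three OFF inputs must give identical (X1, X2) distributions.
def dist(w1, w2):
    return Counter(encode(w1, w2, g, z) for g in units for z in range(n))

assert dist(0, 0) == dist(1, 0) == dist(1, 1)
```

Note that all three OFF inputs land in the confusable set {2, 4}, so Carol learns only the set index, exactly as the theorem requires.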
The coding scheme in Theorem 1 relies on the structure of the confusable sets and the randomizer, upon which feasible expanded functions are built. Thus, it is crucial to understand this structure, i.e., which sets of elements can be used as the randomizer and how the algebraic object is partitioned into confusable sets. This structural problem is addressed next through algebraic characterizations: the finite field case is considered in Section 3.1 and the ring of integers modulo n in Section 3.2.

3.1. Finite Field

We first recall some basic facts about finite fields (refer to standard textbooks such as [26]). A finite field Fq exists only when q = p^n, where p is a prime and n is a positive integer. Fq has q = p^n elements. Any two fields with p^n elements are isomorphic; thus Fq is referred to as the finite field. The p^n elements of Fq are the polynomials a_0 + a_1 x + a_2 x^2 + … + a_{n−1} x^{n−1}, where a_i ∈ {0, 1, …, p − 1}, i ∈ {0, 1, …, n − 1}. The addition and multiplication operations over Fq are defined modulo h(x), where h(x) is an irreducible polynomial of degree n (which always exists). The non-zero elements of Fq form a multiplicative group, denoted Fq×. Fq× is a cyclic group {1, g, g^2, …, g^{q−2}} that can be generated by a primitive element g ∈ Fq×. Denote g^0 = 1.
Example 1.
The finite field F_{2^3} can be constructed with addition and multiplication modulo h(x) = x^3 + x + 1. The multiplicative group F_{2^3}× = {1, x, x + 1, x^2, x^2 + 1, x^2 + x, x^2 + x + 1} can be generated by g = x.
g^2 = x^2, g^3 = x^3 mod (x^3 + x + 1) = x + 1, g^4 = x^2 + x,
g^5 = x^3 + x^2 mod (x^3 + x + 1) = x^2 + x + 1,
g^6 = x^3 + x^2 + x mod (x^3 + x + 1) = x^2 + 1,
g^7 = x^3 + x mod (x^3 + x + 1) = 1 = g^0.
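These powers can be replicated programmatically. The sketch below is our own illustration of Example 1: it represents elements of F_{2^3} as 3-bit masks of polynomial coefficients and reduces modulo h(x) = x^3 + x + 1.

```python
def gf8_mul(a, b):
    """Multiply two elements of F_{2^3}, each represented as a bitmask of
    polynomial coefficients, modulo h(x) = x^3 + x + 1 (bitmask 0b1011)."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # add a * (current power of x) over F_2
        b >>= 1
        a <<= 1                  # multiply a by x
        if a & 0b1000:           # degree reached 3: reduce by h(x)
            a ^= 0b1011
    return result

# g = x (bitmask 0b010) generates the multiplicative group F_{2^3}^x.
g = 0b010
powers, e = [], 1
for _ in range(7):
    powers.append(e)
    e = gf8_mul(e, g)

assert e == 1                                # g^7 = g^0 = 1: cyclic of order 7
assert sorted(powers) == list(range(1, 8))   # all 7 non-zero elements appear
assert powers[3] == 0b011                    # g^3 = x + 1, matching Example 1
```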
Equipped with the above results (in particular, the cyclic property of the multiplicative group Fq×), we are ready to state, in the following theorem, the algebraic characterization of the confusable sets and the randomizer over Fq.
Theorem 2.
For Fq, where q = p^n, p is a prime, n is a positive integer, and g is a primitive element of Fq×, the confusable sets and the randomizer can be chosen as follows. Consider any divisor d of p^n − 1, i.e., b = (p^n − 1)/d is an integer.
γ is uniform over S = {g^0, g^d, g^{2d}, …, g^{(b−1)d}},
Fq = S0 ∪ S1 ∪ S2 ∪ … ∪ Sd,
S0 = {0}, Si = {g^{i−1}, g^{d+i−1}, g^{2d+i−1}, …, g^{(b−1)d+i−1}}, i ∈ {1, 2, …, d}.
Remark 1.
In words, the elements of a confusable set are those whose discrete logarithms have the same remainder modulo a divisor of p^n − 1.
Before we prove Theorem 2, let us first understand it through an example and use it to securely compute a function.
Example 2.
Consider F7 = {0, 1, 2, 3, 4, 5, 6}. A primitive element of F7× is 3. Setting d = 3, the following confusable sets are given by Theorem 2.
S0 = {0}, S1 = {3^0, 3^3} = {1, 27} mod 7 = {1, 6}, S2 = {3^1, 3^4} = {3, 4}, S3 = {3^2, 3^5} = {2, 5}.
Consider the function f ( W 1 , W 2 ) shown in Figure 4, for which a feasible expanded function can be built upon the confusable sets given above.
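The confusable sets of Theorem 2 are easy to generate for small prime fields. The following Python sketch is our own illustration of Example 2:

```python
p = 7         # prime field F_7; its multiplicative group F_7^x is cyclic
g = 3         # a primitive element of F_7^x
d = 3         # a divisor of p - 1 = 6
b = (p - 1) // d                              # here b = 2

# Randomizer support and confusable sets as given by Theorem 2.
S = {pow(g, j * d, p) for j in range(b)}
sets = [{0}] + [{pow(g, j * d + i - 1, p) for j in range(b)}
                for i in range(1, d + 1)]

assert S == {1, 6}
assert sets == [{0}, {1, 6}, {3, 4}, {2, 5}]

# Multiplying any s in S_i by a uniform gamma over S keeps it in S_i.
for Si in sets:
    for s in Si:
        assert {(gamma * s) % p for gamma in S} == Si
```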
While a primitive element g of Fq× is guaranteed to exist, there is no analytic formula for it, and finding one computationally is difficult in general. Further, given the polynomial representation of g, it is generally non-trivial to determine the minimum field size q such that there exists a feasible expanded function over Fq for a specific function f. A list of confusable sets for all finite fields Fq, q < 20, is given in Figure A1 (see Appendix A).
Proof of Theorem 2.
To verify that the definition of the confusable sets is satisfied, we only need to show that, for all s ∈ Si, i ∈ {0, 1, …, d}, γ × s is uniform over Si. This is proved as follows. When i = 0, S0 = {0}, so s = 0 and γ × s = 0 ∈ S0. When i ∈ {1, …, d}, consider any element of Si, say s = g^{jd+i−1}, j ∈ {0, 1, …, b − 1}. We have
γ × s = {g^0, g^d, g^{2d}, …, g^{(b−1)d}} × g^{jd+i−1}
= {g^{jd+i−1}, g^{(j+1)d+i−1}, g^{(j+2)d+i−1}, …, g^{(j+b−1)d+i−1}}
= {g^{i−1}, g^{d+i−1}, g^{2d+i−1}, …, g^{(b−1)d+i−1}} = Si,
where the last equality follows from the fact that g^{bd} = g^{p^n−1} = 1 and the observation that any b consecutive integers form the same set under modulo b, i.e., {0, 1, …, b − 1} = {j, j + 1, …, j + b − 1} mod b. As γ is uniform, γ × s is uniform (over Si) as well. □

3.2. Ring of Integers Modulo n

To facilitate the presentation of the algebraic characterization of the confusable sets and the randomizer over Z n , we first introduce some definitions and preliminary results.
Definition 3 (Set of Integers with Same gcd).
Consider any proper divisor d of a given integer n, i.e., d < n and n/d is an integer. We denote by Zn(d) the set of integers in Zn whose greatest common divisor with n is d, i.e., Zn(d) = {a ∈ Zn | gcd(a, n) = d}.
For example, suppose n = 15 = 3 × 5 , which has proper divisors 1, 3, 5. Then,
Z15(1) = {1, 2, 4, 7, 8, 11, 13, 14}, Z15(3) = {3, 6, 9, 12}, Z15(5) = {5, 10}.
Further, Z15 = {0, 1, …, 14} = {0} ∪ Z15(1) ∪ Z15(3) ∪ Z15(5).
The set Zn(1) has been extensively studied in abstract algebra (see, e.g., [23]) and number theory (see, e.g., [27]), and is referred to as the multiplicative group of integers modulo n (it turns out to form a group under multiplication modulo n), so we adopt the standard notation Zn× = Zn(1).
Note that
Zn(d) = {a ∈ Zn | gcd(a, n) = d} = d × {a ∈ Z_{n/d} | gcd(a, n/d) = 1} = d × Z_{n/d}×.
For example,
Z15(3) = {3, 6, 9, 12} = 3 × {1, 2, 3, 4} = 3 × Z5×, Z15(5) = {5, 10} = 5 × {1, 2} = 5 × Z3×.
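Definition 3 and the identity Zn(d) = d × Z_{n/d}× can be verified directly. A short Python sketch (our illustration) for n = 15:

```python
from math import gcd

def Z(n, d):
    """Z_n(d): elements of Z_n whose gcd with n equals the proper divisor d."""
    return {a for a in range(n) if gcd(a, n) == d}

assert Z(15, 1) == {1, 2, 4, 7, 8, 11, 13, 14}   # the units Z_15^x
assert Z(15, 3) == {3, 6, 9, 12}
assert Z(15, 5) == {5, 10}

# {0} and the sets Z_15(d) together partition Z_15.
parts = [{0}, Z(15, 1), Z(15, 3), Z(15, 5)]
assert sorted(x for P in parts for x in P) == list(range(15))

# The identity Z_n(d) = d x Z_{n/d}^x.
assert Z(15, 3) == {3 * a for a in Z(5, 1)}
assert Z(15, 5) == {5 * a for a in Z(3, 1)}
```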
We present an important result on the projection of a multiplicative subgroup of Zn× onto Zd× in the following lemma. To differentiate a set from a multiset (in which an element may appear several times), we use the notation {¯ H }¯ for a multiset H.
Lemma 1.
Consider an arbitrary subgroup Gn of Zn× (under multiplication modulo n). When we take Gn modulo d (where d is a divisor of n and d ≠ 1), we have multiple copies of a subgroup of Zd× (under multiplication modulo d), i.e., Gn mod d = {¯ Gd, Gd, …, Gd }¯, where Gd is a subgroup of Zd×.
In Lemma 1, G denotes a subgroup and the subscript specifies the original group. The proof of Lemma 1 is presented in Section 3.4. Here, for illustration, we give an example.
Example 3.
Consider a subgroup G15 = {1, 11} of Z15×. We have G15 mod 3 = {1, 2}, so that G3 = {1, 2}, which is a subgroup of (in fact, equal to) Z3× = {1, 2}. G15 mod 5 = {¯ 1, 1 }¯, which is two copies of G5 = {1}, and G5 is a (trivial) subgroup of Z5× = {1, 2, 3, 4}.
Consider another subgroup G15 = {1, 4, 11, 14} of Z15×. G15 mod 3 = {¯ 1, 1, 2, 2 }¯, which is two copies of G3 = {1, 2} = Z3×. G15 mod 5 = {¯ 1, 4, 1, 4 }¯, which is two copies of G5 = {1, 4}, and G5 is a subgroup of Z5× = {1, 2, 3, 4}.
Consider G15 = Z15× = {1, 2, 4, 7, 8, 11, 13, 14}. G15 mod 3 = {¯ 1, 2, 1, 1, 2, 2, 1, 2 }¯, which is four copies of G3 = {1, 2} = Z3×. G15 mod 5 = {¯ 1, 2, 4, 2, 3, 1, 3, 4 }¯, which is two copies of G5 = {1, 2, 3, 4} = Z5×.
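Lemma 1 can be checked numerically for the subgroups in Example 3. The Python sketch below (our illustration) computes Gn mod d as a multiset:

```python
from collections import Counter

def mod_image(G, d):
    """Reduce a subgroup G of Z_n^x elementwise modulo d, kept as a multiset."""
    return Counter(a % d for a in G)

# The three subgroups of Z_15^x used in Example 3.
assert mod_image({1, 11}, 3) == Counter({1: 1, 2: 1})        # G3 = {1, 2}
assert mod_image({1, 11}, 5) == Counter({1: 2})              # 2 copies of {1}

G15 = {1, 4, 11, 14}
assert mod_image(G15, 3) == Counter({1: 2, 2: 2})            # 2 copies of {1, 2}
assert mod_image(G15, 5) == Counter({1: 2, 4: 2})            # 2 copies of {1, 4}

Z15x = {1, 2, 4, 7, 8, 11, 13, 14}
assert mod_image(Z15x, 3) == Counter({1: 4, 2: 4})           # 4 copies of Z_3^x
assert mod_image(Z15x, 5) == Counter({1: 2, 2: 2, 3: 2, 4: 2})  # 2 copies of Z_5^x
```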
Given a subgroup Gd of the group Zd×, we may partition Zd× into cosets (see, e.g., Proposition 4 in Chapter 3 of [23] or Theorem 6.2 of [28]). Applying this with modulus n/d, we have that Z_{n/d}× may be partitioned into cosets of G_{n/d}. Combining with the identity Zn(d) = d × Z_{n/d}× above, we may partition Zn(d) into (scaled) cosets of G_{n/d}. This partition is denoted by Zn(d)/G_{n/d}.
Example 4.
Continuing from Example 3, consider a subgroup G 15 = { 1 , 11 } of Z 15 × . Then,
Z15×/G15 = {1, 11} ∪ {2, 7} ∪ {4, 14} ∪ {8, 13},
where the partition is obtained from the cosets, e.g., {2, 7} = 2 × {1, 11} = 7 × {1, 11} is a coset of G15 with representative 2 ∈ Z15× or 7 ∈ Z15×. Similarly, when G15 = {1, 11}, from Example 3 we have G3 = {1, 2}, G5 = {1}, and the partitions Z15(5)/G3, Z15(3)/G5 are as follows:
Z15(5)/G3 = 5 × {1, 2}, Z15(3)/G5 = 3 × ({1} ∪ {2} ∪ {3} ∪ {4}).
For another choice of G 15 (again from Example 3), consider G 15 = { 1 , 4 , 11 , 14 } of Z 15 × . Then, from Example 3, G 3 = { 1 , 2 } , G 5 = { 1 , 4 } . The partitions are
d = 1: Z15×/G15 = {1, 4, 11, 14} ∪ {2, 7, 8, 13}, d = 3: Z15(3)/G5 = 3 × ({1, 4} ∪ {2, 3}), d = 5: Z15(5)/G3 = 5 × {1, 2}.
For the final choice, G15 = Z15× from Example 3, we have G3 = Z3×, G5 = Z5×, and the partitions are trivial: Z15×/G15 = Z15×, Z15(3)/G5 = 3 × Z5×, and Z15(5)/G3 = 5 × Z3×.
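The coset partitions of Example 4 can be generated with a few lines of code. The helper below is our own illustration (the function name `cosets` is ours); it enumerates cosets under multiplication modulo n:

```python
def cosets(group, subgroup, n):
    """Partition `group` into cosets of `subgroup` under multiplication mod n."""
    seen, parts = set(), []
    for a in sorted(group):
        if a not in seen:
            coset = frozenset((a * h) % n for h in subgroup)
            parts.append(coset)
            seen |= coset
    return parts

Z15x = {1, 2, 4, 7, 8, 11, 13, 14}

# Z_15^x / G15 for G15 = {1, 11}: four cosets of size 2.
assert cosets(Z15x, {1, 11}, 15) == [frozenset({1, 11}), frozenset({2, 7}),
                                     frozenset({4, 14}), frozenset({8, 13})]

# Z_5^x / G5 for G5 = {1, 4} gives {1, 4} and {2, 3}; scaled by 3,
# these are the parts of Z_15(3)/G5.
assert cosets({1, 2, 3, 4}, {1, 4}, 5) == [frozenset({1, 4}), frozenset({2, 3})]
```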
The collection of the cosets Z n ( d ) / G n / d for all proper divisors d is a feasible choice of the confusable sets. This result is stated in the following theorem.
Theorem 3.
For Zn, the confusable sets and the randomizer can be chosen as follows. Consider the set of all proper divisors of n, {d1 = 1, d2, …, db}, and an arbitrary subgroup Gn of Zn×.
γ is uniform over S = Gn,
Zn = S0 ∪ S1 ∪ S2 ∪ … = {0} ∪ Zn×/Gn ∪ Zn(d2)/G_{n/d2} ∪ … ∪ Zn(db)/G_{n/db}.
Before presenting the proof of Theorem 3, we first give an example to illustrate its meaning.
Example 5.
Continuing from Example 4, consider the subgroup G15 = {1, 11} of Z15×. Then, from Theorem 3, the confusable sets are
Z15 = {0} ∪ Z15×/G15 ∪ Z15(3)/G5 ∪ Z15(5)/G3
= {0} ∪ {1, 11} ∪ {2, 7} ∪ {4, 14} ∪ {8, 13} ∪ {3} ∪ {6} ∪ {9} ∪ {12} ∪ {5, 10}.
For each of the confusable sets above, it is easy to verify that, when an element is multiplied by γ (uniform over G15), the result is uniform over the confusable set.
{2, 7} = γ × 2 = γ × 7, {5, 10} = γ × 5 = γ × 10, γ × 3 = {1, 11} × 3 = {¯ 3, 3 }¯.
For another example, consider G 15 = { 1 , 4 , 11 , 14 } . The confusable sets are
Z15 = {0} ∪ Z15×/G15 ∪ Z15(3)/G5 ∪ Z15(5)/G3
= {0} ∪ {1, 4, 11, 14} ∪ {2, 7, 8, 13} ∪ {3, 12} ∪ {6, 9} ∪ {5, 10}.
Let us also verify that the uniform property holds. γ is uniform over G15 = {1, 4, 11, 14}. For example, consider 7 ∈ {2, 7, 8, 13}; then γ × 7 = {7, 28, 77, 98} mod 15 = {7, 13, 2, 8}. Consider 12 ∈ {3, 12}; then γ × 12 = {¯ 12, 48, 132, 168 }¯ mod 15 = {¯ 12, 3, 12, 3 }¯, which is two copies of {3, 12}.
Finally, consider G15 = Z15×. The confusable sets are Z15 = {0} ∪ Z15× ∪ Z15(3) ∪ Z15(5). For any element of Z15(3), say 6, we have γ × 6 = Z15× × 6 = {¯ 6, 12, 9, 12, 3, 6, 3, 9 }¯.
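The confusable sets given by Theorem 3 for Z15 with G15 = {1, 4, 11, 14} can be verified exhaustively, as in the following Python sketch (our illustration):

```python
from collections import Counter

n = 15
G15 = {1, 4, 11, 14}     # randomizer support, a subgroup of Z_15^x
confusable_sets = [{0}, {1, 4, 11, 14}, {2, 7, 8, 13}, {3, 12}, {6, 9}, {5, 10}]

# The sets partition Z_15.
assert sorted(x for S in confusable_sets for x in S) == list(range(n))

# Uniformity property of Definition 1: gamma * s is uniform over the set of s.
for S in confusable_sets:
    for s in S:
        image = Counter((g * s) % n for g in G15)
        assert set(image) == S                  # support is exactly S
        assert len(set(image.values())) == 1    # equal multiplicities
```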
Proof of Theorem 3.
The proof relies on Lemma 1 and the properties of cosets. First, the confusable sets form a partition of Zn. Second, we verify the uniform property, i.e., for all s ∈ Si, γ × s is uniform over Si. Consider any Si, e.g., a set from Zn(di)/G_{n/di}, i ∈ {1, …, b}. From the construction of Si, the set Si × 1/di is a coset of G_{n/di} in Z_{n/di}×. By the definition of cosets and the fact that s/di ∈ Si × 1/di, we have
Si × 1/di = G_{n/di} × s/di.
Next, consider
(Gn × s/di) mod n/di = (Gn mod n/di) × s/di mod n/di
= {¯ G_{n/di}, …, G_{n/di} }¯ × s/di mod n/di (by Lemma 1)
= {¯ Si × 1/di, …, Si × 1/di }¯ mod n/di (by the coset identity above),
so that
γ × s = Gn × s = di × (Gn × s/di) = {¯ Si, …, Si }¯.
Therefore, γ × s is uniform over Si. The proof is complete. □
Remark 2.
From Theorem 3, we see that any subgroup of Z n × can induce a feasible choice of the confusable sets and the randomizer. We list all possible confusable sets for Z n , n < 20 in Figure A2 (see the Appendix B). We also include in the Appendix C some discussion on the structures of the subgroups of Z n × , based on existing group theory and number theory results.

3.3. Converse

One of the challenges in understanding the optimality of secure computation codes is the lack of converse results. In information theory, converse results are impossibility statements used to prove optimality. As a starting point, we compare our achievable scheme against existing converse results for the setting with no security constraint (i.e., the pure computation problem). Interestingly, when the size of the underlying field or ring equals the input size, the scheme in Theorem 1 achieves the information theoretically optimal rate region. Without loss of generality, for secure computation problems, we assume there are no identical rows or columns in the function table (as Carol cannot learn anything about the exact row or column index of such identical rows and columns).
Proposition 1.
Consider independent and uniform inputs, i.e., W 1 , W 2 are independent and uniform over { 0 , 1 , , m 1 } . For a function f ( W 1 , W 2 ) , if a feasible expanded function exists over F q or Z n where q = m or n = m , then the scheme in Theorem 1 is information theoretically optimal.
Achievability directly follows from Theorem 1, and the converse (H(X_1), H(X_2) ≥ log_2 m) follows from a simple observation: even with no security constraint, Alice (Bob) needs to tell Carol the exact value of W_1 (W_2). Otherwise, two values of W_1 (W_2) would be mapped to the same codeword X_1 (X_2), and since f(W_1, W_2) has no identical rows or columns, some value of f(W_1, W_2) could not be decoded correctly. This (and a more general) result has been proved in several different contexts in the literature; see, e.g., the classical work on function computation of correlated sources by Han and Kobayashi [29] (Lemma 1) and its recent generalization [30], the work on computation over a multiple access channel [31] (Lemma 1), and the work on network coding for computing [32,33]. Note that the converse holds for block inputs as well, where the rate is defined as the number of bits in the codeword per input symbol. Since removing the security constraint can only enlarge the rate region, the same converse holds for the secure computation problem as well.
Note that Proposition 1 characterizes the optimal rate region for a class of secure computation problems (which contains infinitely many instances). One could start from the confusable sets of F_q or Z_n and invert them into a function f(W_1, W_2) with input size m = q or m = n. Functions constructed by this method satisfy Proposition 1, and thus we obtain their optimal rate region.
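As a concrete illustration of this inversion (with our own hypothetical labeling of the function values), one can take the confusable sets of Z_4 induced by G_4 = {1, 3} and define f as the index of the confusable set containing the sum of the inputs:

```python
# Confusable sets of Z_4 from Theorem 3 with G_4 = {1, 3}.
sets = [{0}, {1, 3}, {2}]
label = {x: i for i, s in enumerate(sets) for x in s}

# Define f(w1, w2) as the index of the confusable set containing w1 + w2 mod 4.
# By construction, the identity map is a feasible expanded function over Z_4,
# so the scheme of Theorem 1 applies with input size m = n = 4.
f = [[label[(w1 + w2) % 4] for w2 in range(4)] for w1 in range(4)]

# Carol can decode f from the confusable set of X1 + X2 = gamma * (w1 + w2):
# multiplying by gamma (uniform over {1, 3}) permutes elements within each set.
for s in sets:
    for gamma in (1, 3):
        assert {(gamma * x) % 4 for x in s} == s
```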
To the best of our knowledge, the only existing information theoretic converse results for the secure computation problem are the ones obtained in [6], whose expressions involve common information terms and an optimization over a class of distributions, so that the exact bound needs to be evaluated for each individual instance and is generally non-trivial to compute. Interestingly, for some small instances, we find that our achievable scheme is information theoretically optimal (see Remark 3 of Example 6 and Remark 4 of Example 7). For most cases, however, there is a gap between the rate region achieved by the scheme in Theorem 1 and the converse results from [6], and it is not clear if and by how much the scheme and the converse can be improved. The model considered in [6] is the general secure computation problem that allows interactive multi-round protocols; the converse results therein might therefore be too strong for the minimal secure computation problem. We note that there are instances where better schemes than that in Theorem 1 are known (see Examples 9 and 10 in the discussion section).

3.4. Proof of Lemma 1

The proof of Lemma 1 consists of two parts.
First, we show that the set of elements of G_n mod d, denoted G_d, forms a subgroup of Z_d^×. This is proved via two claims: (1) G_d ⊆ Z_d^× and (2) G_d is closed under multiplication modulo d. Note that for finite groups, verifying a subgroup only requires checking the closure property; associativity and the existence of identity and inverse elements are automatically guaranteed (refer to Proposition 1 in Chapter 2 of [23]).
For (1), note that any element g of G_n belongs to Z_n^×, so gcd(g, n) = 1. As d is a divisor of n, we have gcd(g, d) = 1 and gcd(g mod d, d) = 1. Thus, every element g mod d of G_d belongs to Z_d^×, i.e., G_d ⊆ Z_d^×.
For (2), consider any two elements g_1 mod d and g_2 mod d of G_d, where g_1, g_2 ∈ G_n. As G_n forms a group, we have (g_1 × g_2) mod n = g_3 for some g_3 ∈ G_n, i.e., g_1 × g_2 = k × n + g_3 for some integer k. Then,
((g_1 mod d) × (g_2 mod d)) mod d = (g_1 × g_2) mod d
= (k × n + g_3) mod d
= g_3 mod d   (d is a divisor of n)
∈ G_d.
Therefore, G_d is closed under multiplication.
Second, we show that in the multiset G_n mod d, each element of G_d appears the same number of times. Denote G_n = {g_1, g_2, …, g_T}. As G_n is a subgroup of Z_n^×, we have
∀ i ∈ {1, …, T}, (G_n × g_i) mod n = {g_1 × g_i, g_2 × g_i, …, g_T × g_i} mod n = G_n.   (43)
Denote the multiset G̅_d = G_n mod d = {{h_1, …, h_1, h_2, …, h_Q}}, where h_q, q ∈ {1, …, Q}, appears |h_q| times and h_{q_1} ≠ h_{q_2} for all q_1 ≠ q_2. Assume without loss of generality that |h_1| ≥ |h_2| ≥ ⋯ ≥ |h_Q|. We need to show that |h_1| = |h_Q|. This proof is presented next.
From the first part of the proof, we know that G d = { h 1 , h 2 , , h Q } is a subgroup of Z d × . Applying (43) to G d and Z d × , we have
∀ j ∈ {1, …, Q}, (G_d × h_j) mod d = {h_1 × h_j, h_2 × h_j, …, h_Q × h_j} mod d = G_d.   (44)
Further, setting j = 1 in (44), we have
( G d × h 1 ) mod d = { h 1 × h 1 , h 2 × h 1 , , h Q × h 1 } mod d = G d = { h 1 , , h Q } .
Note that multiplication mod d is commutative. Then, there exists j { 1 , , Q } such that
(h_j × h_1) mod d = (h_1 × h_j) mod d = h_Q.   (46)
As h j G d , there exists i { 1 , , T } such that
g_i mod d = h_j.   (47)
On the one hand,
(G̅_d × h_j) mod d = {{h_1 × h_j, …, h_1 × h_j (|h_1| times), h_2 × h_j, …, h_Q × h_j}} mod d = {{h_Q, …, h_Q (|h_1| times), h_1, …, h_{Q−1}}}.   (48) (by (46) and (44))
On the other hand,
(G̅_d × h_j) mod d = (G̅_d × g_i) mod d   (by (47))
= ((G_n mod d) × g_i) mod d
= (G_n × g_i) mod d
= G_n mod d   (by (43))
= G̅_d = {{h_1, …, h_{Q−1}, h_Q, …, h_Q (|h_Q| times)}}.   (52)
Comparing (48) and (52) (i.e., the number of times that h_Q appears), we have |h_1| = |h_Q|. This completes the proof of the second part, and thus the proof of the lemma.
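Lemma 1 is easy to test numerically. The following sketch (our illustration) checks, for a subgroup G_n of Z_n^× and a divisor d of n, that G_n mod d is a uniform number of copies of a subgroup G_d of Z_d^×:

```python
from collections import Counter
from math import gcd

def check_lemma1(n, G_n, d):
    """Check Lemma 1 for a subgroup G_n of Z_n^x and a divisor d of n."""
    assert n % d == 0
    counts = Counter(g % d for g in G_n)
    G_d = set(counts)
    # Every element of G_d appears the same number of times in G_n mod d ...
    assert len(set(counts.values())) == 1
    # ... and G_d is a subgroup of Z_d^x (closure suffices for finite groups).
    assert all(gcd(h, d) == 1 for h in G_d)
    assert all((h1 * h2) % d in G_d for h1 in G_d for h2 in G_d)
    return G_d, next(iter(counts.values()))

# G_15 = {1, 4, 11, 14}: reduction mod 5 gives 2 copies of G_5 = {1, 4},
# and reduction mod 3 gives 2 copies of G_3 = {1, 2}.
assert check_lemma1(15, [1, 4, 11, 14], 5) == ({1, 4}, 2)
assert check_lemma1(15, [1, 4, 11, 14], 3) == ({1, 2}, 2)
# The powers of 2 modulo 15, {1, 2, 4, 8}, reduce mod 5 to all of Z_5^x.
assert check_lemma1(15, [1, 2, 4, 8], 5) == ({1, 2, 3, 4}, 1)
```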

4. Generalization

In this section, we consider several generalizations of the coding scheme presented in the previous section, to illustrate how the insights generalize beyond the basic setting.

4.1. Optimized Additive Randomness

In the coding scheme presented in Theorem 1, the additive common randomness z that appears in the codewords X_1, X_2 is uniform over F_q or Z_n (refer to (7)). This is not necessary; it is a universal and convenient choice that works in all cases and admits a simple proof. We show, through the following example, that an optimized z (which does not have full support over Z_n) may further reduce the communication cost.
Example 6.
Consider the function f ( W 1 , W 2 ) shown in Figure 5, where a feasible expanded function over Z 4 is also depicted. The confusable sets are obtained from Theorem 3 using G 4 = Z 4 × = { 1 , 3 } .
Z_4 = {0} ∪ {1, 3} ∪ {2}, γ is uniform over G_4 = {1, 3}.
From Theorem 1, Alice sends X_1 = γ × W̃_1 + z and Bob sends X_2 = γ × W̃_2 − z to Carol, where γ is uniform over {1, 3}, z is uniform over Z_4 = {0, 1, 2, 3}, and γ, z are independent. That is, Alice and Bob each sends a symbol from Z_4 (i.e., 2 bits) to Carol.
Interestingly, if we choose z to be uniform over {0, 2} (instead of uniform over {0, 1, 2, 3}), the scheme still works. Correctness is unaffected, and for security, we only need (X_1, X_2) (or, equivalently, (X_1, X_1 + X_2)) to be identically distributed when (W_1, W_2) ∈ {(0, 0), (0, 1)}. Note that
(W_1, W_2) = (0, 0) ⟹ (W̃_1, W̃_2) = (1, 0) ⟹ (X_1, X_1 + X_2) = (γ + z, γ),
(W_1, W_2) = (0, 1) ⟹ (W̃_1, W̃_2) = (1, 2) ⟹ (X_1, X_1 + X_2) = (γ + z, γ × 3).   (54)
Both (γ + z, γ) and (γ + z, γ × 3) are uniform over {(1, 1), (3, 1), (3, 3), (1, 3)}. Therefore, the scheme satisfies the security constraint. Importantly, X_2 = γ × W̃_2 − z can now only take the value 0 or 2. Therefore, Bob only needs to send 1 bit (instead of 2 bits) to Carol.
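These distributional claims can be checked by enumeration. The sketch below (our illustration, with the W → W̃ mapping taken from the two cases listed above; the full expanded function of Figure 5 is not reproduced here) verifies both the security condition and the 1-bit property of X_2:

```python
from collections import Counter
from itertools import product

def dist(w1t, w2t):
    """Distribution of (X1, X2) over gamma in {1, 3} and z in {0, 2}, mod 4."""
    return Counter(((g * w1t + z) % 4, (g * w2t - z) % 4)
                   for g, z in product((1, 3), (0, 2)))

# The two confusable input pairs must induce the same codeword distribution:
# (W1, W2) = (0, 0) -> (W1~, W2~) = (1, 0) and (W1, W2) = (0, 1) -> (1, 2).
assert dist(1, 0) == dist(1, 2)

# X2 = gamma * W2~ - z only takes values in {0, 2}: Bob needs just 1 bit.
values_x2 = {x2 for d in (dist(1, 0), dist(1, 2)) for _, x2 in d}
assert values_x2 <= {0, 2}
```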
Remark 3.
If variable-length codes are allowed, the above code can be further improved. Specifically, Alice does not need to distinguish whether X_1 is 1 or 3; e.g., Alice may simply send 1 when X_1 is 1 or 3 (this happens when W̃_1 = 1). Interestingly, the rate region of this code coincides with an existing converse result from Theorem 9 in [6] for any joint distribution of (W_1, W_2) with full support. Thus, this improved code with optimized additive randomness and variable-length codewords turns out to be information theoretically optimal (i.e., even if block codes are allowed). We also note that an alternative optimal code construction, based on a different idea, is presented in [6] (see Algorithm 3).
For a general given function f(W_1, W_2), to find a choice of z with minimum randomness, we may list all identical-distribution conditions in the security constraint (such as (54)) and solve for a z that satisfies all the constraints with minimum entropy (a uniform full-support z always works but has maximum entropy).

4.2. ϵ -Error Schemes with Block Codes

Hitherto, we have focused exclusively on scalar codes and zero-error schemes that work for any joint distribution of (W_1, W_2). In this subsection, we show how to use classical source coding techniques (specifically, structured linear codes, i.e., Korner–Marton coding [34]) that exploit the specific distribution of (W_1, W_2) to improve the communication rate when long block codes and a vanishing error probability are allowed. This is explained through the following binary AND function example.
Example 7.
Consider the binary AND function f(W_1, W_2) = W_1 AND W_2, for which a feasible expanded function over F_3 is shown in Figure 6. The confusable sets are obtained from Theorem 2 using the primitive element g = 2 of F_3^× and the divisor d = 1.
F_3 = {0} ∪ {2^0, 2^1} = {0} ∪ {1, 2}, γ is uniform over {1, 2}.
Then, from Theorem 1, we set X 1 = γ × W ˜ 1 + z , X 2 = γ × W 2 ˜ z so that it suffices to send a symbol from F 3 (i.e., log 2 3 bits) each from Alice and Bob to Carol. In other words, the rate tuple ( log 2 3 , log 2 3 ) is achievable. As mentioned in the introduction, this zero-error scalar code first appeared in Appendix B of [12].
We note that for correct decoding, Carol will compute X 1 + X 2 = γ × ( W 1 ˜ + W 2 ˜ ) , denoted by U. As our goal is only to recover U (securely of course), the amount of information required is simply the entropy of U (which is smaller than log 2 3 bits as long as it is not uniform). The only caveat is that encoding is done in a distributed manner at Alice and Bob respectively, so we just need to compress U with a linear code such that it is compatible with the decoding procedure of X 1 + X 2 . Fortunately, this distributed source compression for sum computation problem has been studied in network information theory. In particular, structured linear codes apply and we will use (the secure version of) Korner–Marton coding [34].
The improvement of the communication rate comes from the observation that in our proposed code, we consider the worst case, i.e., U is not compressed and a symbol from F 3 is sent to represent X 1 regardless of the distribution of U, the variable we wish to recover. When U is not uniform, further compression over long blocks is possible. As a simple example, suppose W 1 and W 2 are two independent uniform binary variables. As a result, U = X 1 + X 2 = γ × ( W ˜ 1 + W ˜ 2 ) is 0 with probability 1 / 4 , is 1 with probability 3 / 8 , and is 2 with probability 3 / 8 (see Figure 6) and the entropy of U is
H(U) = H(1/4, 3/8, 3/8) = (1/4)(11 − 3 log_2 3) ≈ 1.561 < log_2 3.
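The distribution of U and the closed form above can be checked by enumeration. The sketch below (our illustration) assumes the input mapping W̃ = 1 − W, which is consistent with the confusable sets {0} and {1, 2}: under it, W_1 AND W_2 = 1 exactly when W̃_1 + W̃_2 = 0 over F_3.

```python
from collections import Counter
from itertools import product
from math import log2, isclose

# Enumerate (W1, W2, gamma) for uniform binary inputs and gamma in {1, 2}.
outcomes = Counter()
for w1, w2, g in product((0, 1), (0, 1), (1, 2)):
    u = (g * ((1 - w1) + (1 - w2))) % 3      # assumed mapping W~ = 1 - W
    assert (u == 0) == bool(w1 and w2)       # decoding: AND = 1 iff U = 0
    outcomes[u] += 1

p = {u: c / 8 for u, c in outcomes.items()}
assert p == {0: 1/4, 1: 3/8, 2: 3/8}

H = -sum(q * log2(q) for q in p.values())
assert isclose(H, (11 - 3 * log2(3)) / 4) and H < log2(3)
```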
Next, we outline how to use structured linear source codes to achieve the rate tuple (R_1, R_2) = (H(U)/log_2 3, H(U)/log_2 3) symbols from F_3, i.e., (H(U), H(U)) bits, per input bit over long block-lengths with a vanishing probability of error. Consider the L-length extension of the two inputs W_1, W_2, denoted by W_1, W_2, i.e., W_1, W_2 are two sequences of i.i.d. uniform bits of length L. A similar vector notation is used for L-length extensions of other variables; e.g., z represents a length-L sequence of i.i.d. uniform symbols over F_3. We apply our proposed scheme to each bit of the input sequence and then multiply (over F_3) the vector codeword with a matrix A of size (H(U)/log_2 3 + ε)L × L.
X_1 = A · (γ × W̃_1 + z), X_2 = A · (γ × W̃_2 − z),
X_1 + X_2 = A · (γ × (W̃_1 + W̃_2)) = A · U,
where the ‘+’ and ‘×’ operators are symbol-wise, the ‘·’ operator is matrix multiplication, and γ, W̃_1, W̃_2, z, U = γ × (W̃_1 + W̃_2) are L × 1 vectors. Further optimizations of the common randomness consumption are possible: the same randomizer can be used for each input bit, and it suffices to use an additive common randomness variable with entropy L H(U) bits (instead of L log_2 3 bits). Note that the same matrix A must be used by both Alice and Bob. We need to ensure that U can be recovered from A · U. In other words, we now have the well-known point-to-point source coding problem with a linear compressor. Thus, there exists a deterministic matrix A of size (H(U)/log_2 3 + ε)L × L such that we can recover U from A · U with ε probability of error, where ε → 0 as L → ∞. Specifically, a randomly generated A (i.e., choosing each element of A independently and uniformly over F_3) works with high probability. The structured linear coding technique has appeared in the literature many times: it was introduced by Elias in the context of channel coding over a binary symmetric channel [35], used by Wyner in the context of distributed source coding of binary sources (the Slepian–Wolf problem; see Section VI.C of [36]), and used by Korner and Marton in the context of encoding the modulo-two sum of binary sources [34]; generalizations to finite fields are immediate (see, e.g., [37] and Remark 10.2 of [38]).
The security constraint is easily verified. The scalar code is secure by Theorem 1. Then independent application of the scalar code to L-length extensions is also secure. Multiplying with a deterministic matrix A will not leak any information.
Therefore, the optimized block code has a vanishing probability of error and is secure. The achieved rate tuple is (H(U)/log_2 3, H(U)/log_2 3) symbols from F_3, i.e., (H(U), H(U)) bits, for the codewords per input bit.
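The block construction is linear, so the key identity X_1 + X_2 = A · U can be checked directly. The toy sketch below (our illustration, with hypothetical sizes and no claim about the compression rate) applies the scalar scheme symbol-wise and multiplies by a common random matrix over F_3:

```python
import random

random.seed(0)
L, rows = 8, 5                                     # illustrative sizes only
A = [[random.randrange(3) for _ in range(L)] for _ in range(rows)]
w1 = [random.randrange(2) for _ in range(L)]
w2 = [random.randrange(2) for _ in range(L)]
gamma = [random.choice([1, 2]) for _ in range(L)]  # randomizer, per symbol
z = [random.randrange(3) for _ in range(L)]        # additive randomness

def matvec(M, v):
    """Matrix-vector product over F_3."""
    return [sum(a * x for a, x in zip(row, v)) % 3 for row in M]

x1 = matvec(A, [(g * w + s) % 3 for g, w, s in zip(gamma, w1, z)])
x2 = matvec(A, [(g * w - s) % 3 for g, w, s in zip(gamma, w2, z)])
u = [(g * (a + b)) % 3 for g, a, b in zip(gamma, w1, w2)]

# The additive randomness cancels: Carol's observation is exactly A . U.
assert [(a + b) % 3 for a, b in zip(x1, x2)] == matvec(A, u)
```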
Finally, we note that the idea of using Korner–Marton coding for secure computation is not new, e.g., it has been applied to secure sum computations [6,39]. While our objective is not sum computation, Korner–Marton coding still applies because the decoding procedure X 1 + X 2 relies on a linear operation.
Remark 4.
Interestingly, the zero-error code presented above for AND computation is information theoretically optimal among zero-error codes in terms of the communication rate (refer to Theorem 11 of [6]). That is, communicating log_2 3 bits per input bit from Alice and Bob each to Carol is the minimum possible with zero error. Note that as the ε-error code above achieves a better rate than the best zero-error code, we see that for secure computation problems, the ε-error capacity may differ from the zero-error capacity (this fact has been established in prior work [6,39]). We also note that when ε-error is allowed, the optimal rate region for AND computation remains open.
The above linear compression technique applies to all secure computation codes over F_q (refer to Theorem 2); i.e., instead of sending a symbol from F_q, we may compress it to H(U)/log_2 q symbols from F_q, i.e., H(U) bits. However, the same result does not hold for codes over Z_n when n is not a prime. While the same linear compression technique can be applied, the rate performance is not known; that is, it is not known how large the matrix A needs to be if we wish to recover U from A · U. In particular, H(U) bits may not suffice (this requires a better understanding of the open problem of source coding with restricted encoding structures, e.g., modular arithmetic; for related results on source coding with group codes, see, e.g., [40,41] and the references therein).

4.3. Equality Function with Non-Prime-Power Inputs

In this subsection, we continue the discussion of the equality function from the introduction. We consider the equality function with arbitrary input size, i.e., W_1, W_2 ∈ {0, 1, …, m − 1} for an arbitrary integer m, and wish to securely compute whether W_1 is equal to W_2. The approach taken in the introduction works when m is a prime power p^n, so that there exists an invertible mapping between {0, 1, …, p^n − 1} and the elements of the finite field F_{p^n} (say W_1 (W_2) is mapped to W̃_1 (W̃_2)). Then, W̃_1 − W̃_2 is a feasible expanded function, where the zero element and the non-zero elements form the two confusable sets. Now, what if m is not a prime power? We may increase m to a prime power and then use the previous approach, but this requires Alice and Bob to each send a symbol of size larger than log_2 m bits. Interestingly, we show that log_2 m bits are always sufficient for any m (whether or not m is a prime power). To this end, we need a variant of the expand-and-randomize scheme of Theorem 1. To illustrate the idea, in the following we consider the simplest example where m is not a prime power, i.e., m = 6.
Example 8.
Consider W 1 , W 2 { 0 , 1 , , 5 } . f ( W 1 , W 2 ) = Yes if W 1 is equal to W 2 and otherwise f ( W 1 , W 2 ) = No . While 6 is not a prime power, we may decompose it into products of prime powers, i.e., 6 = 2 × 3 . The following scheme works by using a product of our expand-and-randomize schemes over decomposed domains with uniformly permuted inputs, i.e., using two schemes of equality function with prime inputs in parallel.
Alice and Bob share a common random variable Z = ( π , γ 1 , γ 2 , z 1 , z 2 ) , where all the random variables are independent and uniform, π is from the set of all possible permutations with 6 elements, γ 1 is from F 2 × = { 1 } , z 1 is from F 2 = { 0 , 1 } , γ 2 is from F 3 × = { 1 , 2 } , and z 2 is from F 3 = { 0 , 1 , 2 } . The codewords are
X_1 = (a_1, a_2), where a_1 = γ_1 × (π(W_1) mod 2) + z_1 over F_2, a_2 = γ_2 × (π(W_1) mod 3) + z_2 over F_3;
X_2 = (b_1, b_2), where b_1 = γ_1 × (π(W_2) mod 2) + z_1 over F_2, b_2 = γ_2 × (π(W_2) mod 3) + z_2 over F_3.
The ‘+’ and ‘×’ operations in computing a 1 , b 1 ( a 2 , b 2 ) are over F 2 ( F 3 ) . Note that the same permutation π is applied to W 1 and W 2 . The decoding rule of Carol is as follows.
Carol claims that W_1 is equal to W_2 if and only if (a_1, a_2) = (b_1, b_2).
We have zero error because π(W_1) = π(W_2) if and only if (π(W_1) mod 2, π(W_1) mod 3) = (π(W_2) mod 2, π(W_2) mod 3) (this result is typically referred to as the Chinese Remainder Theorem). Next, we verify that the security constraint is satisfied, i.e., when W_1 is not equal to W_2, (X_1, X_2) is identically distributed. Consider any (W_1, W_2) such that W_1 ≠ W_2, so that π(W_1) ≠ π(W_2). Note that (X_1, X_2) is invertible to (a_1, a_2, b_1 − a_1, b_2 − a_2), and the 3 variables a_1, a_2, (b_1 − a_1, b_2 − a_2) are independent. Further, a_1 is uniform over {0, 1}, a_2 is uniform over {0, 1, 2}, and (π(W_1), π(W_2)) is uniform over {(i, j) : i ≠ j, i, j ∈ {0, 1, 2, 3, 4, 5}}, so that
Pr(π(W_1) mod 2 = π(W_2) mod 2, π(W_1) mod 3 ≠ π(W_2) mod 3) = 2/5,
Pr(π(W_1) mod 2 ≠ π(W_2) mod 2, π(W_1) mod 3 = π(W_2) mod 3) = 1/5,
Pr(π(W_1) mod 2 ≠ π(W_2) mod 2, π(W_1) mod 3 ≠ π(W_2) mod 3) = 2/5
⟹ (b_1 − a_1, b_2 − a_2) is uniform over {(0, 1), (0, 2), (1, 0), (1, 1), (1, 2)}.
Therefore, security is guaranteed and sending 1 + log 2 3 = log 2 6 bits from Alice and Bob to Carol each is sufficient.
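The scheme is small enough to verify exhaustively. The following sketch (our illustration) enumerates all inputs and all realizations of Z = (π, γ_1, γ_2, z_1, z_2), with γ_1 = 1 fixed since F_2^× = {1}, and checks both zero error and security:

```python
from collections import Counter
from itertools import permutations, product

def encode(w, pi, g2, z1, z2):
    v = pi[w]
    return ((v + z1) % 2, (g2 * (v % 3) + z2) % 3)  # gamma_1 = 1 over F_2

# Zero error: Carol's rule "equal iff codewords match" is always correct.
for w1, w2 in product(range(6), repeat=2):
    for pi in permutations(range(6)):
        for g2, z1, z2 in product((1, 2), (0, 1), (0, 1, 2)):
            match = encode(w1, pi, g2, z1, z2) == encode(w2, pi, g2, z1, z2)
            assert match == (w1 == w2)

# Security: every unequal input pair induces the same joint distribution.
def joint_dist(w1, w2):
    return Counter(
        (encode(w1, pi, g2, z1, z2), encode(w2, pi, g2, z1, z2))
        for pi in permutations(range(6))
        for g2, z1, z2 in product((1, 2), (0, 1), (0, 1, 2)))

ref = joint_dist(0, 1)
assert all(joint_dist(w1, w2) == ref
           for w1, w2 in product(range(6), repeat=2) if w1 != w2)
```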
Remark 5.
The above scheme generalizes in a straightforward manner to any integer m using a prime-power decomposition of m (this result is typically referred to as the fundamental theorem of arithmetic). As the achieved communication rate log 2 m bits for Alice and Bob each matches the optimal rate for independent and uniform inputs with no security constraint (see Proposition 1), the above secure computation scheme achieves the information theoretically optimal rate region.

5. Discussion

We introduced the expand-and-randomize scheme for the secure computation problem and implemented it over the finite field and the ring of integers modulo n. We characterized the algebraic structure of the feasible expanded functions through the notion of confusable sets. We find it interesting that, while we consider only information theoretic security, the tools invoked from algebra and number theory arise frequently and lie at the core of cryptography under computational security (see, e.g., the textbooks [42,43]). The proposed scheme is very efficient, and sometimes optimal, when the original function is (close to) an isomorphism of such confusable sets. In particular, the information theoretically optimal communication cost is characterized for a new class of functions that contains an infinite number of instances (refer to Proposition 1). However, we are also aware of functions for which better schemes exist, so that our scheme is strictly sub-optimal. In the following, we present two such examples to expose more diverse insights into the challenging open problem of minimal secure computation.

Sub-Optimal Examples

Example 9.
Consider the function shown in Figure 7. A feasible expanded function over F 7 is also depicted. The confusable sets are obtained from Theorem 2 using the primitive element g = 3 of F 7 × and the divisor d = 2 .
F_7 = {0} ∪ {3^0, 3^2, 3^4} ∪ {3^1, 3^3, 3^5} = {0} ∪ {1, 2, 4} ∪ {3, 5, 6}, γ is uniform over {1, 2, 4}.
Therefore, according to Theorem 1, it suffices to send a symbol from F 7 (i.e., log 2 ( 7 ) bits) each from Alice and Bob to Carol. In other words, the rate tuple ( log 2 ( 7 ) , log 2 ( 7 ) ) is achievable.
However, an improved rate tuple ( 2 , 2 ) is achievable using a different coding scheme. Specifically, we use the coding scheme from Section 2 of [12]. The scheme (when applied to this example) is described as follows. The common random variable shared is Z = ( π , z 1 , z 2 ) , where π , z 1 , z 2 are independent and uniform binary random variables. The codewords are
X_1 = (1, z_1) when π = 0, W_1 = 0; (2, z_2) when π = 0, W_1 = 1; (2, z_1) when π = 1, W_1 = 0; (1, z_2) when π = 1, W_1 = 1;
X_2 = (f(0, W_2) + z_1, f(1, W_2) + z_2) when π = 0; (f(1, W_2) + z_2, f(0, W_2) + z_1) when π = 1,
where X_1 and X_2 each contain 2 bits. The coding idea is that the first (second) row of the function table is protected by the uniform noise z_1 (z_2). Bob does not know the value of W_1, so he sends both f(0, W_2) and f(1, W_2) (after being masked by z_1 and z_2) in a random order. Alice uses the first element of X_1 to indicate whether the first or the second element of X_2 contains the desired function output, and uses the second element to carry the noise that masks it. Therefore, the decoding rule is as follows. Denote X_1 = (a_1, a_2) and X_2 = (b_1, b_2). Carol recovers f(W_1, W_2) with no error as b_{a_1} − a_2 (over F_2). Security is guaranteed by the observation that Carol only learns the noise that masks the desired function output and obtains nothing else. Therefore, in this scheme, sending 2 < log_2 7 bits each by Alice and Bob is sufficient.
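The decoding rule b_{a_1} − a_2 can be checked for all inputs and all randomness. In the sketch below, the specific function of Figure 7 is not reproduced in the text, so the table `f` is a hypothetical binary-valued stand-in with two rows; the scheme itself follows the description above.

```python
from itertools import product

# Hypothetical 2 x 4 binary function table (Figure 7 is not reproduced here).
f = {(w1, w2): (w1 ^ w2) & (w2 % 2) for w1 in (0, 1) for w2 in range(4)}

def alice(w1, pi, z1, z2):
    index = [(1, 2), (2, 1)][pi][w1]   # which slot of X2 holds f(W1, .)
    noise = (z1, z2)[w1]               # the noise masking that slot
    return (index, noise)

def bob(w2, pi, z1, z2):
    pair = (f[0, w2] ^ z1, f[1, w2] ^ z2)
    return pair if pi == 0 else pair[::-1]

def carol(x1, x2):
    a1, a2 = x1
    return x2[a1 - 1] ^ a2             # b_{a1} - a2 over F_2

for w1, w2, pi, z1, z2 in product((0, 1), range(4), (0, 1), (0, 1), (0, 1)):
    assert carol(alice(w1, pi, z1, z2), bob(w2, pi, z1, z2)) == f[w1, w2]
```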
Example 10.
Consider the function shown in Figure 8. A feasible expanded function over Z 8 is also depicted. The confusable sets are obtained from Theorem 3 using the subgroup G 8 = { 1 , 3 } of Z 8 × = { 1 , 3 , 5 , 7 } .
Z_8 = {0} ∪ {1, 3} ∪ {5, 7} ∪ {2, 6} ∪ {4}, γ is uniform over G_8 = {1, 3}.
Therefore, according to Theorem 1, it suffices to send a symbol from Z 8 (i.e., 3 bits) each from Alice and Bob to Carol. In other words, the rate tuple ( 3 , 3 ) is achievable.
However, an improved rate tuple is achievable using a different coding scheme. To see this, assume that W_1, W_2 are independent and each is uniform over its support (note that the scheme above does not depend on the joint distribution of (W_1, W_2)). We now describe a scheme that achieves the rate tuple (2, log_2 3), which is strictly better than (3, 3). This scheme is inspired by Algorithm 3 of [6], which is designed for the function in Example 6; we generalize it to the function in Figure 8. Alice and Bob share the common random variable Z = (z, z′), where z, z′ are independent uniform binary random variables. The codewords sent are
X_1 = (W_1, z′) when W_1 = 0; (W_1, z) when W_1 = 1;
X_2 = (W_2 + z) over F_2 when W_2 ∈ {0, 1}; 2 when W_2 = 2,
where X_1 contains 2 bits and X_2 contains log_2 3 bits. Obviously z′ is useless; it appears here only to produce a fixed-length code. An important feature of this function is that W_1 can always be recovered from f(W_1, W_2). Therefore, W_1 is always sent by Alice. Further, W_2 should be protected when W_2 ∈ {0, 1} and W_1 = 0. Thus, W_2 is protected by the uniform noise z; when it should be revealed (i.e., when W_1 = 1), Alice sends the noise z, and when it should be protected, Alice sends an independent (thus useless) noise z′. With the coding idea explained, the decoding rule is now obvious. Carol first checks the value of W_1 (sent by Alice). If W_1 = 0, Carol claims f = 0 if X_2 ∈ {0, 1} and f = 1 if X_2 = 2. If W_1 = 1, Carol claims f = 4 if X_2 = 2; otherwise, if X_2 ∈ {0, 1}, Carol uses z to recover W_2, and based on (W_1, W_2), f is decoded with no error. Security is guaranteed because when (W_1, W_2) ∈ {(0, 0), (0, 1)}, in both cases X_1 and X_2 are independent: in X_1 = (W_1, z′), W_1 is fixed to 0 and z′ is uniform, while X_2 = W_2 + z is also uniform. Therefore, using this improved scheme, it suffices to send 2 < 3 bits by Alice and log_2 3 < 3 bits by Bob, respectively.
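An executable sketch of this scheme is given below. The function table is a hypothetical stand-in consistent with the description (Figure 8 itself is not reproduced in the text): f(0, 0) = f(0, 1) = 0 is the only value pair Carol must not be able to distinguish, and W_1 is recoverable from f.

```python
from itertools import product

# Hypothetical table consistent with the decoding rule described above.
f = {(0, 0): 0, (0, 1): 0, (0, 2): 1, (1, 0): 2, (1, 1): 3, (1, 2): 4}

def alice(w1, z, zp):
    return (w1, zp if w1 == 0 else z)   # reveal the mask z only when W1 = 1

def bob(w2, z, zp):
    return (w2 + z) % 2 if w2 in (0, 1) else 2

def carol(x1, x2):
    w1, key = x1
    if w1 == 0:
        return 0 if x2 in (0, 1) else 1
    if x2 == 2:
        return 4
    return f[1, (x2 - key) % 2]         # unmask W2 with z

# Zero error over all inputs and all randomness.
for w1, w2, z, zp in product((0, 1), (0, 1, 2), (0, 1), (0, 1)):
    assert carol(alice(w1, z, zp), bob(w2, z, zp)) == f[w1, w2]

# Security: (X1, X2) identically distributed on the confusable input pair.
def dist(w1, w2):
    return sorted((alice(w1, z, zp), bob(w2, z, zp))
                  for z in (0, 1) for zp in (0, 1))
assert dist(0, 0) == dist(0, 1)
```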
Going forward, while the characterization of the algebraic structure of the confusable sets over F_q and Z_n is complete (i.e., the confusable sets in Theorems 2 and 3 are exhaustive; a simple consequence of the definition of confusable sets is the closure property under multiplication), the algorithmic aspect of the expand-and-randomize scheme over F_q and Z_n is wide open: we do not have efficient algorithms that can quickly identify (minimal) feasible expanded functions when they exist. The solutions to the current examples were mainly found through the lists of confusable sets in Appendix A and Appendix B. Efficient algorithms that can classify functions and realize associations with isomorphisms of confusable sets are of immediate interest, but appear challenging at the same time. Going beyond the finite field and the ring of integers modulo n, it is interesting to explore other widely studied algebraic objects in abstract algebra [23], e.g., the matrix ring and the polynomial ring. Generally speaking, the expand-and-randomize scheme captures the idea of embedding the function to compute in another function that guarantees security. The potential of this general embedding theme remains to be fully explored. Finally, we note that while we focus on the basic model of minimal (non-interactive three-user) secure computation, the proposed scheme generalizes immediately to interactive protocols (by first interactively generating the common randomness) and to more users (the notion of expanded functions generalizes in a natural manner). Extending the proposed scheme to various models of secure computation [11] is an interesting research avenue.

Author Contributions

Conceptualization, Y.Z. and H.S.; methodology, Y.Z. and H.S.; formal analysis, Y.Z. and H.S.; writing—original draft preparation, Y.Z. and H.S.; writing—review and editing, Y.Z. and H.S.; funding acquisition, H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by funding from NSF grants CCF-2007108 and CCF-2045656.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Confusable Sets of F q , q < 20

Figure A1. A list of confusable sets for F q , q < 20 . S is the set of elements over which the randomizer γ is uniformly distributed. g is a primitive element of F q × and h ( x ) is an irreducible polynomial for F p n , n > 1 .

Appendix B. Confusable Sets of Z n , n < 20

Figure A2. A list of confusable sets for Z n , n < 20 , n is not a prime (prime n has been covered in Figure A1).

Appendix C. Useful Properties of Subgroups of Z n ×

  • It was established by Gauss that Z_n^× is cyclic if and only if n ∈ {2, 4, p^b, 2p^b}, where p is an odd prime and b is a positive integer (see, e.g., Theorem 42 in [27]). In these cases, Z_n^× can be generated by a single element g, typically referred to as a primitive root modulo n. After g is found, we can enumerate all subgroups of Z_n^× (similar to Theorem 2). There is no analytic formula or fast algorithm to find a primitive root modulo n in general (see Section 1.4 of [44]).
  • For the other values of n, we do not have a full understanding of all the subgroups of Z_n^× in general. A useful approach is prime-power decomposition, based on the Chinese Remainder Theorem (see, e.g., Section 7.6 in [23]): Z_n^× is a direct product of the groups corresponding to its prime power factors, i.e., Z_n^× ≅ Z_{p_1^{k_1}}^× × ⋯ × Z_{p_r^{k_r}}^×, where n = p_1^{k_1} ⋯ p_r^{k_r} for distinct primes p_i and positive integers k_i. Any product of subgroups is a subgroup of the product group. However, the reverse is not true: some subgroups of Z_n^× cannot be written as a product of subgroups of the Z_{p_i^{k_i}}^×. To go beyond subgroup products, we may resort to the fundamental theorem of finite Abelian groups (see, e.g., Section 5.2 in [23]), which decomposes Z_n^× into a direct product of cyclic groups Z_b (see Theorem 1.4.1 in [44]); the subgroup enumeration problem then becomes that of counting the subgroups of a finite Abelian group, where the case of a product of two cyclic groups Z_{b_1} × Z_{b_2} is fully solved and the general case is open [45,46,47] (for analytic solutions of some simple cases, see [48]). Finally, the total number of subgroups of Z_n has order O(log n / log log n) [49].
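The CRT decomposition in the second bullet can be illustrated numerically (our illustration, not from the text): for n = 15, the map x ↦ (x mod 3, x mod 5) is a group isomorphism from Z_15^× onto Z_3^× × Z_5^×.

```python
from math import gcd
from itertools import product

n, p, q = 15, 3, 5
Zn = [a for a in range(1, n) if gcd(a, n) == 1]
phi = {a: (a % p, a % q) for a in Zn}

# Bijection onto Z_3^x x Z_5^x = {1, 2} x {1, 2, 3, 4} ...
assert sorted(phi.values()) == sorted(product((1, 2), (1, 2, 3, 4)))
# ... and a homomorphism: phi(a * b) = phi(a) * phi(b) componentwise.
for a, b in product(Zn, repeat=2):
    assert phi[(a * b) % n] == ((a * b) % p, (a * b) % q)
```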

References

  1. Beimel, A.; Orlov, I. Secret sharing and non-Shannon information inequalities. IEEE Trans. Inf. Theory 2011, 57, 5634–5649.
  2. Martín, S.; Padró, C.; Yang, A. Secret sharing, rank inequalities, and information inequalities. IEEE Trans. Inf. Theory 2016, 62, 599–609.
  3. Sun, H.; Jafar, S.A. The Capacity of Private Information Retrieval. IEEE Trans. Inf. Theory 2017, 63, 4075–4088.
  4. Banawan, K.; Ulukus, S. The Capacity of Private Information Retrieval from Coded Databases. IEEE Trans. Inf. Theory 2018, 64, 1945–1956.
  5. Lee, E.J.; Abbe, E. Two Shannon-type problems on secure multi-party computations. In Proceedings of the 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 30 September–3 October 2014; pp. 1287–1293.
  6. Data, D.; Prabhakaran, V.M.; Prabhakaran, M.M. Communication and randomness lower bounds for secure computation. IEEE Trans. Inf. Theory 2016, 62, 3901–3929.
  7. Zhou, Y.; Sun, H.; Fu, S. On the Randomness Cost of Linear Secure Computation. In Proceedings of the 2019 53rd Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 20–22 March 2019; pp. 1–6.
  8. Yao, A.C. Protocols for secure computations. In Proceedings of the 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982), Chicago, IL, USA, 3–5 November 1982; pp. 160–164.
  9. Ben-Or, M.; Goldwasser, S.; Wigderson, A. Completeness theorems for non-cryptographic fault-tolerant distributed computation. In Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, Chicago, IL, USA, 2–4 May 1988; pp. 1–10.
  10. Chaum, D.; Crépeau, C.; Damgard, I. Multiparty unconditionally secure protocols. In Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, Chicago, IL, USA, 2–4 May 1988; pp. 11–19.
  11. Cramer, R.; Damgard, I.B.; Nielsen, J.B. Secure Multiparty Computation and Secret Sharing; Cambridge University Press: Cambridge, UK, 2015.
  12. Feige, U.; Killian, J.; Naor, M. A minimal model for secure computation. In Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, Montreal, QC, Canada, 23–25 May 1994; pp. 554–563.
  13. Applebaum, B.; Holenstein, T.; Mishra, M.; Shayevitz, O. The communication complexity of private simultaneous messages, revisited. In Annual International Conference on the Theory and Applications of Cryptographic Techniques; Springer: Berlin, Germany, 2018; pp. 261–286.
  13. Applebaum, B.; Holenstein, T.; Mishra, M.; Shayevitz, O. The communication complexity of private simultaneous messages, revisited. In Annual International Conference on the Theory and Applications of Cryptographic Techniques; Springer: Berlin, Germany, 2018; pp. 261–286. [Google Scholar]
  14. Ishai, Y.; Kushilevitz, E. Private simultaneous messages protocols with applications. In Proceedings of the Fifth Israeli Symposium on Theory of Computing and Systems, Ramat Gan, Israel, 17–19 June 1997; pp. 174–183. [Google Scholar]
  15. Beimel, A.; Kushilevitz, E.; Nissim, P. The complexity of multiparty PSM protocols and related models. In Annual International Conference on the Theory and Applications of Cryptographic Techniques; Springer: Berlin, Germany, 2018; pp. 287–318. [Google Scholar]
  16. Assouline, L.; Liu, T. Multi-Party PSM, Revisited. Cryptology ePrint Archive, Report 2019/657. 2019. Available online: https://eprint.iacr.org/2019/657 (accessed on 3 October 2021).
  17. Beimel, A.; Gabizon, A.; Ishai, Y.; Kushilevitz, E.; Meldgaard, S.; Paskin-Cherniavsky, A. Non-interactive secure multiparty computation. In Annual Cryptology Conference; Springer: Berlin, Germany, 2014; pp. 387–404. [Google Scholar]
  18. Benhamouda, F.; Krawczyk, H.; Rabin, T. Robust non-interactive multiparty computation against constant-size collusion. In Annual International Cryptology Conference; Springer: Berlin, Germany, 2017; pp. 391–419. [Google Scholar]
  19. Yoshida, M.; Obana, S. On the (in) efficiency of non-interactive secure multiparty computation. Des. Codes Cryptogr. 2018, 86, 1793–1805. [Google Scholar] [CrossRef] [Green Version]
  20. Agarwal, N.; Anand, S.; Prabhakaran, M. Uncovering Algebraic Structures in the MPC Landscape. In Annual International Conference on the Theory and Applications of Cryptographic Techniques; Springer: Berlin, Germany, 2019; pp. 381–406. [Google Scholar]
  21. Halevi, S.; Ishai, Y.; Kushilevitz, E.; Rabin, T. Best possible information-theoretic MPC. In Theory of Cryptography Conference; Springer: Berlin, Germany, 2018; pp. 255–281. [Google Scholar]
  22. Beimel, A.; Ishai, Y.; Kushilevitz, E. Ad hoc PSM protocols: Secure computation without coordination. In Annual International Conference on the Theory and Applications of Cryptographic Techniques; Springer: Berlin, Germany, 2017; pp. 580–608. [Google Scholar]
  23. Dummit, D.S.; Foote, R.M. Abstract Algebra; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  24. Ishai, Y.; Kushilevitz, E. Randomizing polynomials: A new representation with applications to round-efficient secure computation. In Proceedings of the 41st Annual Symposium on Foundations of Computer Science, Redondo Beach, CA, USA, 12–14 November 2000; pp. 294–304. [Google Scholar]
  25. Yuval, I.; Kushilevitz, E. Perfect constant-round secure computation via perfect randomizing polynomials. In International Colloquium on Automata, Languages, and Programming; Springer: Berlin, Germany, 2002; pp. 244–256. [Google Scholar]
  26. Lidl, R.; Niederreiter, H. Finite Fields; Cambridge University Press: Cambridge, UK, 1997; Volume 20. [Google Scholar]
  27. Shanks, D. Solved and Unsolved Problems in Number Theory; Chelsea Publishing Company: New York, NY, USA, 1978. [Google Scholar]
  28. Judson, T. Abstract Algebra: Theory and Applications; Stephen F. Austin State University: Nacogdoches, TX, USA, 2014. [Google Scholar]
  29. Han, T.S.; Kobayashi, K. A Dichotomy of Functions F(x,y) of Correlated Sources (X,Y) from the Viewpoint of the Achievable Rate Region. IEEE Trans. Inf. Theory 1987, 33, 69–76. [Google Scholar] [CrossRef]
  30. Kuzuoka, S.; Watanabe, S. On distributed computing for functions with certain structures. IEEE Trans. Inf. Theory 2017, 63, 7003–7017. [Google Scholar] [CrossRef] [Green Version]
  31. Nazer, B.; Gastpar, M. Computation over multiple-access channels. IEEE Trans. Inf. Theory 2007, 53, 3498–3516. [Google Scholar] [CrossRef]
  32. Appuswamy, R.; Franceschetti, M.; Karamchandani, N.; Zeger, K. Network coding for computing: Cut-set bounds. IEEE Trans. Inf. Theory 2011, 57, 1015–1030. [Google Scholar] [CrossRef] [Green Version]
  33. Huang, C.; Tan, Z.; Yang, S.; Guang, X. Comments on cut-set bounds on network function computation. IEEE Trans. Inf. Theory 2018, 64, 6454–6459. [Google Scholar] [CrossRef] [Green Version]
  34. Korner, J.; Marton, K. How to encode the modulo-two sum of binary sources. IEEE Trans. Inf. Theory 1979, 25, 219–221. [Google Scholar] [CrossRef]
  35. Elias, P. Coding for noisy channels. IRE Conv. Rec. 1955, 3, 37–46. [Google Scholar]
  36. Wyner, A. Recent results in the shannon theory. IEEE Trans. Inf. Theory 1974, 20, 2–10. [Google Scholar] [CrossRef]
  37. Csiszar, I. Linear codes for sources and source networks: Error exponents, universal coding. IEEE Trans. Inf. Theory 1982, 28, 585–592. [Google Scholar] [CrossRef]
  38. Gamal, A.E.; Kim, Y.-H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  39. Data, D.; Dey, B.K.; Mishra, M.; Prabhakaran, V.M. How to securely compute the modulo-two sum of binary sources. In Proceedings of the 2014 IEEE Information Theory Workshop (ITW 2014), Hobart, Australia, 2–5 November 2014; pp. 496–500. [Google Scholar]
  40. Sahebi, A.G.; Pradhan, S.S. Abelian group codes for channel coding and source coding. IEEE Trans. Inf. Theory 2015, 61, 2399–2414. [Google Scholar] [CrossRef] [Green Version]
  41. Heidari, M.; Pradhan, S.S. How to compute modulo prime-power sums. In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 1824–1828. [Google Scholar]
  42. Katz, J.; Lindell, Y. Introduction to Modern Cryptography; Chapman and Hall/CRC: Boca Raton, FL, USA, 2014. [Google Scholar]
  43. Shoup, V. A Computational Introduction to Number Theory and Algebra; Cambridge University Press: Cambridge, UK, 2009. [Google Scholar]
  44. Cohen, H. A Course in Computational Algebraic Number Theory; Springer Science & Business Media: New York, NY, USA, 2013; Volume 138. [Google Scholar]
  45. Tărnxaxuceanu, M. An arithmetic method of counting the subgroups of a finite abelian group. Bull. Math. Soc. Sci. Math. Roum. 2010, 53, 373–386. [Google Scholar]
  46. Tóth, L. Subgroups of finite abelian groups having rank two via Goursat’s lemma. Tatra Mt. Math. Publ. 2014, 59, 93–103. [Google Scholar] [CrossRef] [Green Version]
  47. Bauer, K.; Sen, D.; Zvengrowski, P. A generalized goursat lemma. arXiv 2011, arXiv:1109.0024. [Google Scholar] [CrossRef] [Green Version]
  48. Petrillo, J. Counting subgroups in a direct product of finite cyclic groups. Coll. Math. J. 2011, 42, 215–222. [Google Scholar] [CrossRef]
  49. Martin, G.; Troupe, L. The distribution of the number of subgroups of the multiplicative group. J. Aust. Math. Soc. 2017, 108, 46–97. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The minimal secure computation problem [12].
Figure 2. The expand-and-randomize coding scheme for the equal function. In the function table, each row corresponds to a value of W_1 and each column corresponds to a value of W_2. When the function output is a random variable, the set of possible values is shown in the table. The functions W_1 − W_2 and γ × (W_1 − W_2) are computed over the finite field F_3, and γ is uniform over the set {1, 2}.
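The equal-function scheme of Figure 2 can be checked exhaustively. The script below is a sketch under our reading of the figure (with the expansion written as the difference W_1 − W_2 over F_3): it verifies correctness, i.e., the output is zero exactly when the inputs match, and security, i.e., for mismatched inputs the output is uniform over {1, 2} regardless of the particular pair, so Carol learns nothing beyond equality.

```python
from collections import Counter

P = 3  # arithmetic over the finite field F_3

def carol_output(w1, w2, gamma):
    # Randomized expanded function: gamma * (w1 - w2) over F_3.
    return gamma * (w1 - w2) % P

# Correctness: the output is 0 exactly when the inputs are equal.
for w1 in range(P):
    for w2 in range(P):
        for gamma in (1, 2):
            assert (carol_output(w1, w2, gamma) == 0) == (w1 == w2)

# Security: for any unequal pair, the output (over the choice of gamma)
# is uniform on {1, 2}, so all unequal pairs look identical to Carol.
for w1 in range(P):
    for w2 in range(P):
        if w1 != w2:
            dist = Counter(carol_output(w1, w2, g) for g in (1, 2))
            assert dist == Counter({1: 1, 2: 1})
```

The key point is that the nonzero elements of F_3 form a single orbit under multiplication by a uniform nonzero γ, so the randomization step erases the value of W_1 − W_2 while preserving whether it is zero.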
Figure 3. The expand-and-randomize coding scheme for the selected-switch function. The expanded function W̃_1 + W̃_2 and the randomized expanded function γ × (W̃_1 + W̃_2) are defined over Z_6, the ring of integers modulo 6. γ is uniform over {1, 5}, the set of integers that are coprime to 6.
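The randomization step of Figure 3 multiplies the expanded sum by a uniform unit of Z_6. What survives this randomization are the orbits of Z_6 under unit multiplication; the snippet below (illustrative, not the paper's code, and independent of the specific expansion table) computes them, showing that an observer of γ × s can distinguish only which of the four orbits {0}, {1, 5}, {2, 4}, {3} the value s lies in.

```python
from math import gcd

Z = 6
units = [u for u in range(Z) if gcd(u, Z) == 1]  # units of Z_6: [1, 5]

# Elements s, s' are indistinguishable after randomization iff they lie
# in the same orbit {s * u mod 6 : u a unit of Z_6}.
orbits = {}
for s in range(Z):
    orbit = frozenset(s * u % Z for u in units)
    orbits.setdefault(orbit, []).append(s)

# Four orbits: {0}, {1, 5}, {2, 4}, {3}.
assert sorted(sorted(o) for o in orbits) == [[0], [1, 5], [2, 4], [3]]
```

Unlike the field case, where all nonzero values form one orbit, Z_6 has several nontrivial orbits; this is why the expansion step must place the values to be hidden within a common orbit, while the values to be distinguished land in different orbits.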
Figure 4. An expand-and-randomize secure computation code over F_7, where γ is uniform over {1, 6}.
Figure 5. An expand-and-randomize secure computation code over Z_4, where γ is uniform over {1, 3}.
Figure 6. An expand-and-randomize secure computation code (for the binary AND function) over F_3, where γ is uniform over {1, 2}.
Figure 7. An expand-and-randomize secure computation code over F_7, where γ is uniform over {1, 2, 4}.
Figure 8. An expand-and-randomize secure computation code over Z_8, where γ is uniform over {1, 3}.

Cite as: Zhao, Y.; Sun, H. Expand-and-Randomize: An Algebraic Approach to Secure Computation. Entropy 2021, 23, 1461. https://doi.org/10.3390/e23111461
