Article

Efficient Secure Multi-Party Computation for Multi-Dimensional Arithmetics and Its Applications †

1 Beijing Institute of Mathematical Sciences and Applications, Beijing 101408, China
2 School of Mathematics and Physics, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in The 23rd International Conference on Cryptology and Network Security (CANS 2024), Cambridge, UK, 24–27 September 2024.
Cryptography 2025, 9(3), 50; https://doi.org/10.3390/cryptography9030050
Submission received: 1 April 2025 / Revised: 1 July 2025 / Accepted: 2 July 2025 / Published: 3 July 2025
(This article belongs to the Special Issue Cryptography and Network Security—CANS 2024)

Abstract

Over years of development in secure multi-party computation (MPC), many sophisticated functionalities have been made practical, and multi-dimensional operations appear increasingly often in MPC protocols, especially in protocols involving datasets of vector elements, such as privacy-preserving biometric identification and privacy-preserving machine learning. In this paper, we introduce a new kind of correlation, called tensor triples, designed to make multi-dimensional MPC protocols more efficient. We discuss the generation, usage, and applications of tensor triples and show that they can accelerate privacy-preserving biometric identification protocols, such as FingerCode, Eigenfaces, and FaceNet, by more than 1000 times, with reasonable offline costs, and grant pre-computability to the secure matrix multiplication process in privacy-preserving machine learning protocols, such as SecureML and SecureNN, while achieving similar efficiency.

1. Introduction

1.1. Motivation

Secure multi-party computation (MPC) is one of the central subfields of cryptography. MPC aims to accomplish a joint evaluation of a function over inputs provided by multiple parties without revealing any extra information. Because it protects the inputs, it has been widely used in the analysis and processing of private or sensitive data held by multiple parties. Over years of development, circuit-based MPC has gradually gained the capability of practically computing ever more sophisticated and large-scale functions. Beyond one-dimensional numeric computation, the need to handle higher-dimensional operations has also become pressing. For instance, researchers have for years explored applications of MPC to privacy-preserving biometric identification [1,2,3,4,5,6,7,8,9], privacy-preserving machine learning [10,11,12,13,14], and many other practical fields that require intensive use of vector and matrix operations, such as vector tensoring and matrix multiplication. A detailed description is provided below.
  • Privacy-Preserving Machine Learning
    The training process of machine learning models often relies on large-scale data collected from multiple sources, such as personal information, medical records, financial transactions, and user behavior logs. Directly sharing raw data among parties poses significant privacy risks and may violate legal regulations. On the other hand, when these models are used, the unavoidable exchange of information about sensitive data, such as personal information and private model parameters, between servers and clients also necessitates secure protocols for privacy protection. Privacy-preserving machine learning aims to enable collaborative model training and inference without exposing sensitive data. The core challenge in privacy-preserving machine learning is secure computation over encrypted or distributed data. Many machine learning algorithms, from the most basic functionalities such as linear regression and clustering to more sophisticated model structures such as convolutional neural networks and large language models (LLMs), rely heavily on matrix operations, particularly matrix multiplication, for procedures like forward propagation, back propagation, and gradient updates. Secure matrix product protocols are therefore crucial in privacy-preserving machine learning, enabling parties to jointly compute matrix products without revealing individual data entries.
  • Privacy-Preserving Biometric Identification
    Biometric identification systems rely on physiological or behavioral traits such as fingerprints, iris scans, facial features, and gait patterns for authentication and recognition. Traditional biometric systems store and process biometric templates in raw form, usually in centralized databases, raising concerns about data security, unauthorized access, and identity theft. Privacy-preserving biometric identification techniques aim to enable secure biometric matching while preventing information leakage. In privacy-preserving biometric identification, the secure matrix product plays a crucial role in similarity computations, such as secure batched dot products and squared Euclidean distance computation, where biometric features are often represented as high-dimensional vectors or matrices. Secure matrix product protocols are therefore crucial in privacy-preserving biometric identification for comparing encrypted biometric templates without revealing their contents.
Since straightforward homomorphic encryption methods are often computationally inefficient, it is crucial to find a way to accelerate multi-dimensional arithmetic for circuit-based MPC protocols. For instance, the SPDZ framework has been used to boost multi-dimensional MPC protocols [15].
Meanwhile, apart from classical correlation generation protocols such as OT and Beaver triples, researchers have also sought new kinds of correlations (for example, [16]) that can fulfill different needs of MPC protocols and perform more efficiently than classical approaches [17,18,19,20]. Among these variants, vector oblivious linear evaluation (VOLE), as an efficient way to provide correlated random vector sharings for two parties, has been widely used as a convenient auxiliary tool to accomplish circuit-based MPC involving multi-dimensional inputs. We therefore explore a way to boost MPC protocols involving multi-dimensional objects, such as vectors and matrices, taking advantage of the correlation properties of VOLE.

1.2. Our Contribution

The goal of this paper is to explore a more efficient and flexible way to fulfill multi-dimensional secure multi-party computation. We introduce a new kind of correlation called tensor triples, which can be generated using subfield VOLE. We show that tensor triples can be used to assist and accelerate MPC on multi-dimensional arithmetic with the following properties:
  • Efficiency: The tensor triple method is much more efficient, both computationally and communicatively, than the usual Beaver method when dealing with secure matrix multiplication and the vector outer product.
  • Flexibility, or pre-computability: The usual matrix triple method for secure matrix multiplication cannot generate triples in the offline phase without knowledge of the dimensions of the matrices involved in the online phase of the MPC protocol, while the tensor triple method can generate tensor triples beforehand that are usable in the online phase for all matrices whose dimensions are below a pre-determined threshold. When the parties cannot determine the sizes of the matrices during pre-computation, or when the matrices involved in the online phase of the protocol may have various sizes, it seems to the authors that only the tensor triple method can perform the triple generation process in the offline phase.
We will also present a detailed discussion of the advantages of the tensor triple method in later sections.
As a result, we explore several applications of tensor triples to privacy-preserving biometric identification and privacy-preserving machine learning protocols. More specifically, we implement three privacy-preserving biometric identification protocols, namely FingerCode [21], Eigenfaces [22], and FaceNet [23]. As for the privacy-preserving machine learning protocols, we compare our tensor triple generation procedure with the straightforward matrix triple method adopted by SecureML [24] and SecureNN [25]. We also analyze the performance of the implementations and support our claim that the tensor triple method indeed provides a more efficient way to carry out multi-dimensional MPC protocols. Our implementation results show that the online phase of secure squared Euclidean distance computation in 128 batched privacy-preserving FingerCode queries against a database of 1024 references takes only 0.082 s, and that of 80 batched privacy-preserving Eigenfaces queries against a database of 1000 references takes only 0.032 s, both of which are thousands of times faster than the classical GSHADE framework [6]. Meanwhile, the online phase of our implementation of privacy-preserving FaceNet achieves a speedup of more than 10,000 times compared to previous works. Even when the offline cost is taken into consideration, the implementations still achieve a significant acceleration. As for the privacy-preserving machine learning protocols, our implementation of the secure matrix product possesses a flexibility, or pre-computability, property compared to the matrix triple method used in SecureML and SecureNN, while achieving almost the same online computational and communication costs.

1.3. Related Works

This paper is an extended version of our paper published in the 23rd International Conference on Cryptology and Network Security (CANS 2024) [26].
Abspoel et al. [16] proposed the concept of the “outer product triple” through linear secret sharing schemes over Galois rings and explained that outer product triples can be used to assist the secure multi-party computation of outer products between vectors. Boyle et al. [20] proposed the idea of using subfield VOLE for secure matrix multiplications. The definition of the outer product triple in [16] is equivalent to the notion of the tensor triple we define in Section 2.3. We introduce protocols involving these triples for the secure 2PC of the outer product and matrix multiplication as noted in [20], provide a detailed description of the protocols, and apply them to a broader range of parameters.
There also exist generic ways to fulfill multi-dimensional MPC. One practical approach is realization through the SPDZ framework (for instance, [15,27]). Since our work only uses comparatively lightweight MPC components such as VOLE, we will show with detailed statistics that our method achieves high efficiency and greatly outperforms the homomorphic encryption-based protocols.
Similarly, multiple works aim to accelerate privacy-preserving biometric identification and privacy-preserving machine learning protocols. For instance, for FingerCode, Eigenfaces, and FaceNet, the three protocols discussed in this paper, we are aware of various works and improvements. Nevertheless, we have not discovered any work using VOLE or similar techniques to improve their efficiency. We will still discuss them in the corresponding sections and show a detailed comparison between different implementations. As introduced earlier, we are also aware of previous privacy-preserving machine learning protocols, such as SecureML and SecureNN, which use the matrix triple method to fulfill MPC protocols. We will show that this method cannot implement a fully pre-computable triple generation process, while, in our case, the generation procedure is completely pre-computable.

1.4. Security Model

All protocols involved in the rest of this paper are secure against semi-honest and computationally bounded adversaries.

1.5. Organization of Paper

In Section 2, we define the necessary notions and primitives in MPC, define the notion of the tensor triple, and briefly explain the intuition behind how it can be used to accelerate multi-dimensional circuit-based MPC protocols. We then proceed in Section 3 by introducing several ways to generate tensor triples. The protocols focus on the generation of tensor triples for two parties, as is the usual need in most practical applications. Then, we explain the usage of tensor triples to accomplish the MPC of basic algebraic operations for scalars, vectors, and matrices. In Section 4, we discuss possible applications of tensor triples in practice. It should be emphasized that tensor triples can be used to accelerate all multi-dimensional MPC protocols universally, and the applications listed in that section are merely a few instances. In Section 5, implementations of the protocols as well as the results of the experiments are presented. It can be clearly seen that the tensor triple method provides a significant speedup as well as the crucial property of pre-computability.

2. Preliminaries and Notations

2.1. Vectors and Matrices

In this paper we mainly deal with multi-dimensional objects such as vectors and matrices. We use $\mathbb{F}_p$ to denote a finite field of size $p$, and we use $K$ to denote a finite field or a finite ring. The inner product of two vectors $u, v \in K^n$ is defined as $u \cdot v = \sum_{i=1}^{n} u_i v_i$, and the outer product of two vectors $u \in K^m$, $v \in K^n$ is defined as the matrix $u \otimes v = (u_i v_j)_{1 \le i \le m,\, 1 \le j \le n} \in M_{m \times n}(K)$. We use both $AB$ and $A \cdot B$ to denote the standard matrix multiplication.

2.2. Oblivious Transfer (OT)

We provide a very brief introduction to oblivious transfer together with its multiple variants in order to fix all the notation to be used. In an oblivious transfer [28], the sender, holding a pair of messages $(m_0, m_1)$, interacts with the receiver, holding a choice bit $b$. The result ensures that the receiver learns $m_b$ but obtains no knowledge of $m_{1-b}$, while the sender obtains no knowledge of $b$. In an OT extension protocol $\mathrm{OT}_l^n$, the input of the sender is $n$ message pairs $(m_{i,0}, m_{i,1}) \in (\{0,1\}^l)^2$ and the input of the receiver is a string $b \in \{0,1\}^n$. The result allows the receiver to learn $m_{i, b[i]}$ for $1 \le i \le n$. In a random OT (ROT), the sender inputs nothing beforehand but obtains two random strings as the message pair, and the receiver inputs nothing as well but obtains the choice bit together with the selected message. Similarly, a batched version of an ROT (also known as an OT extension; see [29,30,31]) that generates $n$ message pairs of bit-length $l$ is denoted by $\mathrm{ROT}_l^n$. A correlated OT (COT) [20,32,33] is a variant of an ROT that allows the sender to pre-determine a string $\Delta$ and obtain two correlated random strings as the message pair with their XOR equal to $\Delta$. The extension of a COT, denoted by $\mathrm{COT}_m^n$, allows the sender to choose $\Delta \in \mathbb{F}_{2^m}$. The protocol eventually provides two uniformly distributed vectors $u \in \mathbb{F}_2^n$, $v \in \mathbb{F}_{2^m}^n$ to the receiver and $v \oplus (u \cdot \Delta)$ to the sender. A subfield vector oblivious linear evaluation (sVOLE) protocol is a generalization of $\mathrm{COT}_m^n$ to an arbitrary finite base field. A vector oblivious linear evaluation (VOLE) allows one party to obtain two vectors $u, v \in \mathbb{F}_p^n$, and the other party to obtain a scalar $x \in \mathbb{F}_p$ and the linear evaluation $u x + v$.
In later sections, we will mainly use the random variants of the $\mathrm{COT}_m^n$ and sVOLE functionalities defined below in Figure 1. It is not difficult to see that $\mathrm{RCOT}_m^n$ is a special case of $\mathcal{F}_{\mathrm{RsVOLE}}^{p,m,n}$ when $p = 2$.
VOLE protocols with semi-honest and computational security in the OT-hybrid model have been defined in [18,20]. Subfield VOLE, as an important variant of VOLE, has also been implemented in various ways (see [19,20,33,34,35,36]).

2.3. Tensor Triple

In this section, we first define the notion of (additive) vector sharing. By definition, an (additive) secret sharing of a vector $v$ is simply a set $\{[v]_1, \ldots, [v]_s\}$ of $s$ vectors such that $\sum_{i=1}^{s} [v]_i = v$, provided the condition that
$$\Pr\bigl(v = x \mid \{[v]_j\}_{j \neq i}\bigr) = \Pr\bigl(v = x' \mid \{[v]_j\}_{j \neq i}\bigr)$$
for any $i = 1, \ldots, s$ and any $x, x'$ in the ambient space.
The notion of a tensor triple has been proposed in various forms (see [16,20]) to more efficiently fulfill the MPC of vectors. We propose the following definition to better capture its properties.
Definition 1 
(Tensor triple). A tensor triple $(u, v, W)$ consists of data $u \in K^m$, $v \in K^n$, $W \in M_{m \times n}(K)$ satisfying $u \otimes v = u v^T = W$.
Definition 2 
(Tensor triple sharing). Let $P_1, \ldots, P_s$ be the participants of a secure multi-party computation protocol over $K$. A tensor triple sharing scheme provides two vectors and one matrix $[u]_i \in K^m$, $[v]_i \in K^n$, $[W]_i \in M_{m \times n}(K)$ to the participant $P_i$ such that the $[u]_i$, $[v]_i$ form secret sharings of $u \in K^m$, $v \in K^n$, respectively, with the relation $u \otimes v = u v^T = \sum_{i=1}^{s} [W]_i$. Shares of $u$, $v$, and $W$ will be denoted by $[u]$, $[v]$, and $[W]$ if indices are not particularly involved. For convenience, such a triple will be called an $(m, n)$-triple. A tensor triple sharing scheme is called information-theoretically (or computationally) secure if the distribution of $u$, $v$ to an adversary who obtains $\{([u]_i, [v]_i, [W]_i)\}_{i \neq i^*}$ for any $i^* \in \{1, \ldots, s\}$ is information-theoretically (or computationally) indistinguishable from the uniform one.
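To make Definitions 1 and 2 concrete, the following minimal C++ sketch (our own illustration, not part of the paper’s implementation) builds a random $(3,2)$-triple over $\mathbb{Z}_{2^{32}}$ and additively shares it between two parties; the wrap-around of unsigned 32-bit arithmetic provides the ring reduction for free.

```cpp
#include <cassert>
#include <cstdint>
#include <random>
#include <vector>

using Vec = std::vector<uint32_t>;
using Mat = std::vector<Vec>; // row-major

static std::mt19937 rng(42);
static uint32_t rnd() { return static_cast<uint32_t>(rng()); }

int main() {
    const size_t m = 3, n = 2;
    // A tensor triple (u, v, W) with W = u ⊗ v over Z_{2^32}.
    Vec u(m), v(n);
    for (auto& e : u) e = rnd();
    for (auto& e : v) e = rnd();

    // Additive two-party sharing (Definition 2 with s = 2): share 0 is
    // uniformly random, share 1 is the difference, so each share alone
    // reveals nothing about the triple.
    Vec u0(m), u1(m), v0(n), v1(n);
    Mat W0(m, Vec(n)), W1(m, Vec(n));
    for (size_t i = 0; i < m; i++) { u0[i] = rnd(); u1[i] = u[i] - u0[i]; }
    for (size_t j = 0; j < n; j++) { v0[j] = rnd(); v1[j] = v[j] - v0[j]; }
    for (size_t i = 0; i < m; i++)
        for (size_t j = 0; j < n; j++) {
            W0[i][j] = rnd();
            W1[i][j] = u[i] * v[j] - W0[i][j]; // shares of W = u v^T
        }

    // Recombination check: the shares sum back to a valid tensor triple.
    for (size_t i = 0; i < m; i++)
        for (size_t j = 0; j < n; j++)
            assert(W0[i][j] + W1[i][j] == (u0[i] + u1[i]) * (v0[j] + v1[j]));
    return 0;
}
```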

3. Tensor-Triple-Based MPC Protocols

3.1. Tensor Triple Generation

Now, we would like to introduce multiple ways to efficiently generate tensor triples. Figure 2 shows the ideal functionality of tensor triple generation. In practice, the generation process can be fulfilled via subfield VOLE. We provide these protocols in detail below.

3.1.1. sVOLE-Based Two-Party Tensor Triple Generation

An OT-based protocol for generating Beaver multiplication triples was proposed in [37]. In [38], the authors established a more efficient protocol based on a correlated-OT extension. In this section, we propose an sVOLE-based method to efficiently generate tensor triples.
Observe that under a fixed $\mathbb{F}_p$-vector space isomorphism $\varphi: \mathbb{F}_{p^m} \to \mathbb{F}_p^m$ (for instance, $\varphi(a_0 + a_1\alpha + \ldots + a_{m-1}\alpha^{m-1}) = (a_0, \ldots, a_{m-1})$, where $\mathbb{F}_{p^m}$ is realized as an extension $\mathbb{F}_p[\alpha]$ by adjoining a root of an irreducible monic polynomial of degree $m$), $\mathcal{F}_{\mathrm{RsVOLE}}^{p,m,n}$ becomes the following equivalent functionality, as shown in Figure 3.
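As a toy illustration of $\varphi$ (our own example, not taken from the paper), take $p = 2$ and $m = 2$:

```latex
% F_4 realized as F_2[\alpha] with \alpha^2 = \alpha + 1:
\mathbb{F}_4 = \{\,0,\; 1,\; \alpha,\; \alpha + 1\,\}, \qquad
\varphi(a_0 + a_1\alpha) = (a_0, a_1) \in \mathbb{F}_2^{2}.
% Multiplication by a fixed field element is F_2-linear under \varphi,
% e.g. for the element \alpha:
\varphi\bigl(\alpha\,(a_0 + a_1\alpha)\bigr)
  = \varphi\bigl(a_1 + (a_0 + a_1)\alpha\bigr)
  = (a_1,\; a_0 + a_1).
```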
Given a protocol $\mathrm{RsVOLE}(m, n, \lambda)$ realizing $\mathcal{F}_{\mathrm{RsVOLE}}^{p,m,n}$, where $\lambda$ is the statistical security parameter, we can easily formulate many ways to generate a tensor multiplication triple. Figure 4 shows a straightforward method.
In particular, when $p = 2$, one may also use COT protocols in an analogous way to generate tensor triples over fields of even characteristic.
We use two ways to materialize $\mathrm{RsVOLE}(m, n, \lambda)$. The first method is to use COT for the case $p = 2$, as shown in Figure 5. The idea is the same as the one implemented in [20], which uses masking vectors in $\mathbb{F}_{2^m}^n$. We adapt and formalize the implementation into the following protocol using masking vectors in $\mathbb{F}_2^n$.
The second way is to use silent OT (SOT). The SOT protocol can be directly used to fulfill $\mathrm{RsVOLE}(m, n, \lambda)$. The silent OT process first lets the two parties obtain a compressed form of random secret shares $u, v$ with the correlation $u = v \oplus x \cdot e \in \mathbb{F}_{2^m}^N$, where $e \in \mathbb{F}_2^N$ is a random, sparse vector held by one party and $x \in \mathbb{F}_{2^m}$ is a random field element held by the other party. Once this distribution is fulfilled, the two parties can use a public pre-determined binary matrix $H$ to compress the dimension of the shares from $N$ down to $n < N$; more specifically, $uH = vH \oplus x \cdot (eH)$. As a variant of the learning parity with noise (LPN) assumption asserts, the vector $eH$ is computationally pseudorandom. This, under the classical isomorphism $\varphi$, gives exactly a random sVOLE distribution.
To fulfill the distribution of the compressed form, [20] utilizes a GGM PRF tree to obliviously transfer a punctured PRF key $k\{\alpha\}$ together with the punctured output $z = \mathrm{PRF}(k, \alpha) + x$ to the receiver. To be more specific, the two parties invoke a total of $l = \log N$ parallel OT executions, where the sender’s message pairs consist of sums of “left” and “right” nodes in each layer of the GGM tree and the receiver chooses the messages according to the bit expansion of his chosen path to $\alpha$. A parallel distribution of multiple punctured PRF keys leads to a sharing of a vector differing by the sparse vector $x \cdot e$ and hence a distribution of the needed correlation. We refer to [20] for a detailed description of the SOT protocol.

3.1.2. Third-Party Tensor Triple Generation

A third party with sufficient computational power may provide triples for multiple parties in a much more efficient way. This idea has been explored in many works, such as [39,40]. The flexibility of the tensor triple allows the server to provide triples of fixed dimensions while fulfilling the needs of all lower-dimensional computations. More specifically, the dimensions of the triples may be pre-determined, as a generic $(n, n)$-triple can be tailored to serve as a pair of triples of dimensions $(s, t)$ and $(n - s, n - t)$ for any $s, t < n$. This means that the parties may not need to know the precise dimensions in advance for the pre-processing procedure. A great advantage is that a specialized server may serve as the triple generator for multiple sets of multiple parties in order to speed up all pre-processing procedures. When a trusted server exists, the server-aided tensor triple distribution can be naturally extended to a generic multi-party scenario. As a reference, [41] provides server-aided 3PC and 4PC tensor triple distribution protocols with optimizations.

3.2. Secure Multi-Dimensional Arithmetic Evaluations

In this section we discuss various vector operations that may be performed securely using Beaver’s method. When $m = n = 1$, tensor triples are simply the usual Beaver triples; therefore, they can also be used for basic arithmetic. We mainly discuss the multi-dimensional computations below.

3.2.1. Linear Operations and Dot Product

As expected, additions and operations involving constants may all be computed locally, without any interaction, by homomorphicity. More specifically, the following operations are securely multi-party computable and can be performed locally without any interaction:
  • $[a] + [b] = [a + b]$;
  • Given a public constant vector $c$, $[a + c]_1 = [a]_1 + c$ and $[a + c]_i = [a]_i$ for $i \neq 1$;
  • $c[a] = [ca]$, where $c \in K$ is a constant scalar;
  • $c \cdot [a] = [c \cdot a]$, where $a \in K^n$ and $c \in K^n$ is a constant vector;
  • $c \otimes [a] = [c \otimes a]$, where $c$ is a constant vector.
Now, let us consider the dot product of two vectors $a, b$ of the same dimension $n$. Suppose that we have an $(n, n)$-triple. First, write $a - u = s$ and $b - v = t$. By Beaver’s method, each party may announce its shares of $s$ and $t$ to recover the two vectors. Then, the parties can securely compute the shares as $[a \cdot b] = s \cdot t + [u] \cdot t + [v] \cdot s + \mathrm{tr}([W])$, where the public term $s \cdot t$ is included by only one party. Note that the last term uses the fact that the trace function is linear, together with $\mathrm{tr}(W) = \mathrm{tr}(u v^T) = u \cdot v$. The correctness can easily be verified by a direct computation. One thing to note is that using a tensor triple for a single secure inner product is rather wasteful; the tensor triple method is therefore more suitable for the case where a batch of cross inner products is needed. The batching process will be discussed in detail in Section 4.
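The following self-contained C++ sketch (our own simulation of both parties within one process; all names are hypothetical) runs this dot product procedure with an $(n, n)$-triple over $\mathbb{Z}_{2^{32}}$ and checks the recombined result against $a \cdot b$:

```cpp
#include <cassert>
#include <cstdint>
#include <random>
#include <vector>

using Vec = std::vector<uint32_t>;
static std::mt19937 rng(7);
static uint32_t rnd() { return static_cast<uint32_t>(rng()); }

uint32_t dot(const Vec& x, const Vec& y) {
    uint32_t s = 0;
    for (size_t i = 0; i < x.size(); i++) s += x[i] * y[i];
    return s;
}

int main() {
    const size_t n = 4;
    // Secret inputs a, b and the (n, n)-triple components u, v.
    Vec a(n), b(n), u(n), v(n);
    for (auto& x : a) x = rnd();
    for (auto& x : b) x = rnd();
    for (auto& x : u) x = rnd();
    for (auto& x : v) x = rnd();
    // Only tr(W) = u·v matters for the dot product, so we share the trace.
    uint32_t trW = dot(u, v);

    // Additive shares for parties P0 / P1.
    auto split = [&](const Vec& x, Vec& x0, Vec& x1) {
        x0.resize(n); x1.resize(n);
        for (size_t i = 0; i < n; i++) { x0[i] = rnd(); x1[i] = x[i] - x0[i]; }
    };
    Vec a0, a1, b0, b1, u0, u1, v0, v1;
    split(a, a0, a1); split(b, b0, b1); split(u, u0, u1); split(v, v0, v1);
    uint32_t trW0 = rnd(), trW1 = trW - trW0;

    // Each party announces its share of s = a - u and t = b - v.
    Vec s(n), t(n);
    for (size_t i = 0; i < n; i++) {
        s[i] = (a0[i] - u0[i]) + (a1[i] - u1[i]);
        t[i] = (b0[i] - v0[i]) + (b1[i] - v1[i]);
    }

    // Local shares of a·b = s·t + u·t + v·s + tr(W); P0 absorbs the public s·t.
    uint32_t d0 = dot(s, t) + dot(u0, t) + dot(v0, s) + trW0;
    uint32_t d1 = dot(u1, t) + dot(v1, s) + trW1;
    assert(d0 + d1 == dot(a, b)); // correctness check
    return 0;
}
```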

3.2.2. Outer Product and Matrix Product

Figure 6 shows the ideal functionality for secure multi-party outer product computation. One can use exactly the same method to fulfill the need of an outer product. Suppose that all parties would like to compute the outer product $a \otimes b$ of two vectors $a \in K^m$ and $b \in K^n$ with the help of an $(m, n)$-triple $(u, v, W)$. First, write $a - u = s$ and $b - v = t$. Then, similarly, we can compute
$$a \otimes b = (s + u) \otimes (t + v) = s \otimes t + u \otimes t + s \otimes v + W.$$
Therefore, we can similarly derive the protocol described in Figure 7.
As a direct application of the outer product, we can now introduce a way to securely compute the product of two matrices. Figure 8 shows the functionality of secure multi-party matrix multiplication. In practice, we consider $A \in M_{m \times k}(K)$, $B \in M_{k \times n}(K)$. First, denote the columns of $A$ by $A = (a_1, \ldots, a_k)$ and the rows of $B$ by $B = (b_1, \ldots, b_k)^T$. As is well known, the matrix product has the following outer product expansion: $AB = \sum_{i=1}^{k} a_i \otimes b_i$. Therefore, it is easy to compute the matrix product from the outer product primitive. All the parties may simultaneously perform $k$ outer product primitives to compute each summand in the formula above and then individually add the results together without any further interaction. The protocol is described in Figure 9.
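The expansion can be checked with a few lines of C++ (a plaintext illustration, written by us, of what the protocol in Figure 9 computes summand by summand):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using Mat = std::vector<std::vector<uint32_t>>; // row-major

// AB computed as the sum of k outer products a_i ⊗ b_i, where a_i is the
// i-th column of A and b_i the i-th row of B; each summand corresponds to
// one invocation of the secure outer product primitive in the protocol.
Mat viaOuterProducts(const Mat& A, const Mat& B) {
    size_t m = A.size(), k = B.size(), n = B[0].size();
    Mat C(m, std::vector<uint32_t>(n, 0));
    for (size_t i = 0; i < k; i++)            // one (m, n)-triple per i
        for (size_t r = 0; r < m; r++)
            for (size_t c = 0; c < n; c++)
                C[r][c] += A[r][i] * B[i][c]; // (a_i)_r * (b_i)_c
    return C;
}

int main() {
    Mat A = {{1, 2}, {3, 4}, {5, 6}};          // 3 x 2
    Mat B = {{7, 8, 9, 10}, {11, 12, 13, 14}}; // 2 x 4
    Mat C = viaOuterProducts(A, B);
    assert(C[0][0] == 29u);  // 1*7 + 2*11, matches ordinary A·B
    assert(C[2][3] == 134u); // 5*10 + 6*14
    return 0;
}
```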

3.2.3. Security

Regarding the security of all protocols proposed above, the proofs follow from the simple observation that the triple always hides the input vectors perfectly, and in none of the protocols is the third (matrix) share published in any form. This also illustrates the correct and secure usage of tensor triples, namely to randomize the input vectors before announcing them while always keeping the third (matrix) share secret. The precise statements of security are given below.
Theorem 1 
([20]). The COT-based RsVOLE protocol (Figure 5) realizes the functionality $\mathcal{F}_{\mathrm{RsVOLE}}^{p,m,n}$ in the two-party setting and is semi-honest secure.
Theorem 2. 
The sVOLE-based tensor triple generation protocol $\mathrm{TT.Gen}()$ realizes the functionality $\mathcal{F}_{\mathrm{TTGen}}^{p,m,n}$ in the two-party setting and is semi-honest secure in the $\mathcal{F}_{\mathrm{RsVOLE}}^{p,m,n}$-hybrid model.
Proof. 
Due to symmetry, it suffices to consider a semi-honest adversary $P_0$.
Security against corrupted $P_0$: We construct the simulator $\mathsf{SIM}_{\Pi}^{P_1}$, which acts as the participant $P_1$ as follows.
1. The simulator invokes $\mathcal{F}_{\mathrm{RsVOLE}}^{p,m,n}$ as the sender and receives $x \in_R \mathbb{F}_{p^m}$, $V \in_R \mathbb{F}_{p^m}^{n}$.
2. The simulator invokes $\mathcal{F}_{\mathrm{RsVOLE}}^{p,m,n}$ as the receiver and receives $y \in_R \mathbb{F}_p^{n}$, $U \in_R \mathbb{F}_{p^m}^{n}$.
The following hybrid games are then performed to show the claim of the statement.
$\mathcal{H}_0$: The real-world execution of the protocol, where the simulator acts exactly as an honest $P_1$.
$\mathcal{H}_1$: The ideal-world execution, where the simulator acts as $\mathsf{SIM}_{\Pi}^{P_1}$. This hybrid is indistinguishable from the original hybrid $\mathcal{H}_0$ by Theorem 1. □
Theorem 3. 
The secure outer product protocol $\mathrm{Out}()$ realizes the functionality $\mathcal{F}_{\mathrm{Out}}$ and is semi-honest secure in the $\mathcal{F}_{\mathrm{TTGen}}^{p,m,n}$-hybrid model.
Proof. 
The proof is similar to that of Theorem 2. Due to symmetry, it suffices to consider a semi-honest adversary $P_i$ for a fixed $i$.
Security against corrupted $P_i$: We construct the simulator $\mathsf{SIM}_{\Pi}^{\hat{P}_i}$, which acts as the other participants as follows.
1. The simulator receives from the adversary $P_i$ the shares $[a]_i \in K^m$, $[b]_i \in K^n$ of two vectors $a \in K^m$, $b \in K^n$.
2. The simulator invokes $\mathcal{F}_{\mathrm{TTGen}}^{p,m,n}$ as the participant $P_i$ and receives $[u]_i \in K^m$, $[v]_i \in K^n$, $[W]_i \in M_{m \times n}(K)$.
3. The simulator samples random vectors $[s]_{\hat{i}} \in K^m$, $[t]_{\hat{i}} \in K^n$ for all $\hat{i} \neq i$ and sends the shares $[s]_{\hat{i}}$, $[t]_{\hat{i}}$ to the adversary $P_i$.
4. The view of the adversary follows the execution of the protocol with $[s]_{\hat{i}}$, $[t]_{\hat{i}}$.
The following hybrid games are then performed to show the claim of the statement.
$\mathcal{H}_0$: The real-world execution of the protocol, where the simulator acts exactly as the honest participants.
$\mathcal{H}_1$: The ideal-world execution, where the simulator acts as $\mathsf{SIM}_{\Pi}^{\hat{P}_i}$. This hybrid is indistinguishable from the original hybrid $\mathcal{H}_0$ by Theorem 2 and the security of the Beaver method. □
Theorem 4. 
The secure matrix product protocol $\mathrm{MatProd}()$ realizes the functionality $\mathcal{F}_{\mathrm{MatProd}}$ and is semi-honest secure in the $\mathcal{F}_{\mathrm{Out}}$-hybrid model.
Proof. 
As pointed out in the earlier sections, $\mathcal{F}_{\mathrm{MatProd}}$ can be seen as multiple independent applications of $\mathcal{F}_{\mathrm{Out}}$. Therefore, $\mathrm{MatProd}()$, as an application of multiple individual $\mathrm{Out}()$ invocations, realizes $\mathcal{F}_{\mathrm{MatProd}}$. The proof is similar to that of Theorem 2 and we omit it for brevity. □

3.2.4. Compact Storage of Tensor Triples

Since tensor triples are often generated in the pre-processing phase, the storage of tensor triples is a major concern. As tensor triples can be regarded as a decomposition of matrix triples, the total size of the tensor triples used in a generic secure matrix multiplication is comparatively larger than that of the classical matrix triples needed in the same protocol. To solve this problem, note that the outer product expansion
$$AB = \sum_{i=1}^{k} a_i \otimes b_i$$
in terms of the columns of $A = (a_1, \ldots, a_k) \in M_{m \times k}(K)$ and the rows of $B = (b_1, \ldots, b_k)^T \in M_{k \times n}(K)$ can be arranged in multiple ways to form more compact decompositions. More specifically, we have the decomposition $AB = \sum_{i=1}^{s} A_i B_i$, where $A_i = (a_{j_1 + \ldots + j_{i-1} + 1}, \ldots, a_{j_1 + \ldots + j_i}) \in M_{m \times j_i}(K)$ and $B_i = (b_{j_1 + \ldots + j_{i-1} + 1}, \ldots, b_{j_1 + \ldots + j_i})^T \in M_{j_i \times n}(K)$ for arbitrary $j_1, \ldots, j_s \ge 1$ such that $j_1 + \ldots + j_s = k$. Therefore, a secure matrix multiplication can be performed using a family of triples $(U_i, V_i, W_i)$ for $i = 1, \ldots, s$, where $U_i \in M_{m \times j_i}(K)$, $V_i \in M_{j_i \times n}(K)$, and $W_i \in M_{m \times n}(K)$ are such that $U_i V_i = W_i$. Also, note that this type of matrix triple can be generated by assembling tensor triples. Therefore, the parties can amalgamate some tensor triples to save space while preserving an adequate number of original tensor triples. A description of the amalgamation method is given in Figure 10, and a small sketch follows below. Note that this can shrink the storage space of tensor triples to approximately $1/l$ of the original.
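The following C++ sketch (our own illustration of the idea behind Figure 10, under the assumption of a plain two-dimensional array representation) amalgamates $l$ tensor triples into one matrix triple and verifies the relation $UV = W$:

```cpp
#include <cassert>
#include <cstdint>
#include <random>
#include <vector>

using Vec = std::vector<uint32_t>;
using Mat = std::vector<Vec>;
static std::mt19937 rng(5);
static uint32_t rnd() { return static_cast<uint32_t>(rng()); }

int main() {
    const size_t l = 4, m = 3, n = 2;
    // l independent tensor triples (u_i, v_i, W_i = u_i ⊗ v_i).
    std::vector<Vec> us(l, Vec(m)), vs(l, Vec(n));
    std::vector<Mat> Ws(l, Mat(m, Vec(n)));
    for (size_t i = 0; i < l; i++) {
        for (auto& e : us[i]) e = rnd();
        for (auto& e : vs[i]) e = rnd();
        for (size_t r = 0; r < m; r++)
            for (size_t c = 0; c < n; c++) Ws[i][r][c] = us[i][r] * vs[i][c];
    }

    // Amalgamation: the u_i become the columns of U (m x l), the v_i the
    // rows of V (l x n), and W = Σ W_i. Storage drops from l(m + n + mn)
    // words to l(m + n) + mn words, i.e. roughly a factor 1/l for large m, n.
    Mat U(m, Vec(l)), V(l, Vec(n)), W(m, Vec(n, 0));
    for (size_t i = 0; i < l; i++) {
        for (size_t r = 0; r < m; r++) U[r][i] = us[i][r];
        V[i] = vs[i];
        for (size_t r = 0; r < m; r++)
            for (size_t c = 0; c < n; c++) W[r][c] += Ws[i][r][c];
    }

    // Check U·V = Σ u_i ⊗ v_i = W, so (U, V, W) is a valid matrix triple.
    for (size_t r = 0; r < m; r++)
        for (size_t c = 0; c < n; c++) {
            uint32_t uv = 0;
            for (size_t i = 0; i < l; i++) uv += U[r][i] * V[i][c];
            assert(uv == W[r][c]);
        }
    return 0;
}
```

The same amalgamation applied to each party's shares preserves the additive sharing, since every step is linear.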
Although this seems an ideal way to shrink the storage space, we need to clarify that the triples lose some flexibility in the process of amalgamation. More specifically, if the parties have only generated amalgamated tensor triples of the form $(U, V, W)$, where $U \in M_{m \times l}(K)$, $V \in M_{l \times n}(K)$, and $W \in M_{m \times n}(K)$, then they can only directly accomplish a secure matrix multiplication of two matrices $A \in M_{s \times k}(K)$ and $B \in M_{k \times t}(K)$ for $s \le m$, $t \le n$, and $l \mid k$. We refer to Section 4.3.1 for a more detailed explanation of this issue. Hence, the parties need to choose the amalgamation size $l$ carefully to balance the storage space and the flexibility of the amalgamated tensor triples. In general, if the parties have generated amalgamated tensor triples of the form $(U_i, V_i, W_i)$ for $i = 1, \ldots, s$, where $U_i \in M_{m \times l_i}(K)$, $V_i \in M_{l_i \times n}(K)$, and $W_i \in M_{m \times n}(K)$, then they can securely compute the matrix product of two matrices $A \in M_{s \times k}(K)$ and $B \in M_{k \times t}(K)$ for $s \le m$, $t \le n$, and any $k$ such that $k = \sum_{i=1}^{s} l_i x_i$ has a non-negative integer solution vector $x = (x_1, \ldots, x_s)$.
According to our observation, in practice, the dimensions of the matrices involved in matrix multiplication in various biometric identification and machine learning functionalities are multiples of a power of 2 or 10, due to coding or conventional reasons. Hence, in practice, the MPC participants may choose the amalgamation size accordingly, say $l = 8$, for instance.
In general, the amalgamation process can also be extended to multiple amalgamation sizes $l_1, \ldots, l_t$. By choosing these sizes wisely, the parties may achieve a more compact storage of tensor triples.
As a last note, although tensor triples, together with their amalgamated variants, can all be trimmed, column-wise in the first component and row-wise in the second component, to fulfill the secure computation requirement of two matrices with smaller sizes, they cannot be extended in the same direction directly to fulfill the secure computation of two matrices with larger sizes. One possible solution to this problem is to generate tensor triples with “infinite” dimensions. We refer to [41] for details of such a method.

4. Applications

4.1. Batched Squared Euclidean Distance Computing

The squared Euclidean distance is a widely used and crucial function in biometric identification and machine learning. In biometric identification, it is often the case that the client launches multiple queries and the server computes the squared Euclidean distance between each query and all references in its own dataset. This type of batched query is essentially a matrix multiplication functionality. Therefore, one can use tensor triples to accelerate this process. We explain this with concrete examples below.

4.2. Batched Privacy-Preserving Biometric Identification

In this section, we show applications of tensor triples to privacy-preserving biometric identification protocols. Biometric identification, such as face recognition and fingerprint recognition, often involves computation between biometric samples and references in a dataset. In this scenario, Euclidean distance computation between a vector and a fixed family of vectors is carried out each time a query is launched. We demonstrate that tensor triples can be used for batched queries in privacy-preserving biometric identification protocols to dramatically increase efficiency.

4.2.1. FingerCode

FingerCode [3,21] is a fingerprint recognition algorithm. In the setting of FingerCode, the server holds a dataset of references $Y = (y_1^T, \ldots, y_n^T) \in M_{k \times n}(K)$. The client would like to securely make a batch of queries $X = (x_1^T, \ldots, x_m^T)^T \in M_{m \times k}(K)$ for recognition. The protocol proceeds as described below.
1. The parties securely compute $D = D_X + D_Y - 2XY \in M_{m \times n}(K)$, where $D_X = (x_1 x_1^T, \ldots, x_m x_m^T) \otimes (1, 1, \ldots, 1)$ and $D_Y = (1, 1, \ldots, 1) \otimes (y_1 y_1^T, \ldots, y_n y_n^T)$.
2. The parties securely compare the entries of $D$ with the pre-determined threshold $d$, recognizing $x_i$ as $y_j$ if $D_{ij} \le d$.
Note that $D_X$ and $D_Y$ can be computed locally. Therefore, the first step can be fulfilled by implementing $\mathrm{MatProd}(X, Y)$ using tensor triples, as illustrated by the plaintext sketch below. The second step can be carried out using any regular implementation of a secure comparison protocol.
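As a plaintext sanity check of the identity in step 1 (our own example, not part of the protocol itself), the following C++ snippet verifies $\|x_i - y_j\|^2 = (D_X)_{ij} + (D_Y)_{ij} - 2(XY)_{ij}$ on small integer data:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using Mat = std::vector<std::vector<int64_t>>;

int main() {
    // Two queries (rows of X) and three references (columns of Y), k = 2.
    Mat X = {{1, 2}, {3, 4}};        // m x k
    Mat Y = {{5, 7, 9}, {6, 8, 10}}; // k x n, column j is reference y_j
    const size_t m = 2, k = 2, n = 3;

    for (size_t i = 0; i < m; i++)
        for (size_t j = 0; j < n; j++) {
            int64_t dX = 0, dY = 0, xy = 0, direct = 0;
            for (size_t t = 0; t < k; t++) {
                dX += X[i][t] * X[i][t];  // (D_X)_{ij} = x_i · x_i
                dY += Y[t][j] * Y[t][j];  // (D_Y)_{ij} = y_j · y_j
                xy += X[i][t] * Y[t][j];  // (XY)_{ij}  = x_i · y_j
                direct += (X[i][t] - Y[t][j]) * (X[i][t] - Y[t][j]);
            }
            // ||x_i - y_j||^2 = x_i·x_i + y_j·y_j - 2 x_i·y_j
            assert(dX + dY - 2 * xy == direct);
        }
    return 0;
}
```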

4.2.2. Eigenfaces

Eigenfaces [22] is a classical face recognition algorithm. In the setting of Eigenfaces, the server holds a dataset of eigenfaces $U = (u_1^T, \ldots, u_n^T) \in M_{k \times n}(K)$, an average face $\bar{u} \in K^k$, and a dataset of $N$ projected faces $Y = (y_1^T, \ldots, y_N^T) \in M_{n \times N}(K)$. The client would like to securely make a batch of queries $X = (x_1^T, \ldots, x_m^T)^T \in M_{m \times k}(K)$ for recognition. The protocol proceeds as described below.
1. The parties securely compute $\bar{X} = (X - \bar{U}) U \in M_{m \times n}(K)$, where $\bar{U} = (\bar{u}^T, \ldots, \bar{u}^T)^T \in M_{m \times k}(K)$;
2. The parties securely compute $D = D_X + D_Y - 2\bar{X}Y \in M_{m \times N}(K)$, where $D_X = (\bar{x}_1 \bar{x}_1^T, \ldots, \bar{x}_m \bar{x}_m^T) \otimes (1, 1, \ldots, 1)$ and $D_Y = (1, 1, \ldots, 1) \otimes (y_1 y_1^T, \ldots, y_N y_N^T)$. More specifically, they use Beaver triples to compute the $\bar{x}_i \cdot \bar{x}_i$ and a tensor triple to compute $\bar{X}Y$. Also, note that $D_Y$ can be computed locally;
3. The parties securely compare the entries of $D$ with the pre-determined threshold $d$, recognizing $x_i$ as $y_j$ if $D_{ij} \le d$.

4.2.3. FaceNet

FaceNet [23] is a more recent facial recognition system based on deep learning. It was proposed in 2015 and successfully provides a very-high-quality mapping from face images to vector representatives. In its setting, the server holds a dataset of references $Y = (y_1^T, \ldots, y_n^T) \in M_{k \times n}(K)$. The client would like to securely make a batch of queries $X = (x_1^T, \ldots, x_m^T)^T \in M_{m \times k}(K)$ for recognition. We assume that all the data have been pre-processed through a well-trained network. The MPC part of the overall protocol proceeds exactly as in the FingerCode case, and we omit the protocol for brevity.
As a remark, the flexibility of tensor triples allows us to apply all protocols above on a dataset of vectors in an arbitrary dimension. This is extremely useful as dimensions of data points may vary in different settings.

4.2.4. One-Sided Triple Method

Note that, in the protocols above and also in other practical applications, the parties often need to compute the product of two matrices, where one is fully determined by one party’s input and the other is fully determined by the other party’s input. In this case, to securely compute an outer product, instead of using the original tensor triple method, they can simply invoke one subfield VOLE to generate a “one-sided” triple and use a number of these triples to accomplish the secure computation. More specifically, let $(a, V)$ and $(b, U)$ be the VOLE outputs distributed to the participants $P_1$ and $P_2$, respectively, such that $U = V + a \otimes b$. When the parties want to securely compute the outer product $x \otimes y$, where $x$ is determined by $P_1$ and $y$ is determined by $P_2$, the parties announce $s = a + x$ and $t = b + y$, respectively. Then, since
$$x \otimes y = (s - a) \otimes (t - b) = (s \otimes t - a \otimes t - V) + (U - s \otimes b),$$
the two parties can output the values enclosed in the two parentheses, respectively: $P_1$ can compute the first summand from $a$, $V$, and the public $s$, $t$, while $P_2$ can compute the second from $b$, $U$, and $s$. Compared to the tensor triple method, a one-sided triple only requires a single invocation of the subfield VOLE protocol.
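The following C++ sketch (again a single-process simulation of both parties, with hypothetical names) distributes a one-sided triple, runs the masking and announcement steps, and checks that the two local outputs recombine to $x \otimes y$:

```cpp
#include <cassert>
#include <cstdint>
#include <random>
#include <vector>

using Vec = std::vector<uint32_t>;
using Mat = std::vector<Vec>;
static std::mt19937 rng(3);
static uint32_t rnd() { return static_cast<uint32_t>(rng()); }

Mat outer(const Vec& p, const Vec& q) {
    Mat M(p.size(), Vec(q.size()));
    for (size_t i = 0; i < p.size(); i++)
        for (size_t j = 0; j < q.size(); j++) M[i][j] = p[i] * q[j];
    return M;
}

int main() {
    const size_t m = 3, n = 2;
    // One-sided triple from a single sVOLE: P1 holds (a, V), P2 holds (b, U)
    // with U = V + a ⊗ b.
    Vec a(m), b(n), x(m), y(n);
    for (auto& e : a) e = rnd();
    for (auto& e : b) e = rnd();
    for (auto& e : x) e = rnd(); // P1's private input
    for (auto& e : y) e = rnd(); // P2's private input
    Mat V(m, Vec(n)), U(m, Vec(n));
    for (size_t i = 0; i < m; i++)
        for (size_t j = 0; j < n; j++) {
            V[i][j] = rnd();
            U[i][j] = V[i][j] + a[i] * b[j];
        }

    // Announcements: P1 sends s = a + x, P2 sends t = b + y.
    Vec s(m), t(n);
    for (size_t i = 0; i < m; i++) s[i] = a[i] + x[i];
    for (size_t j = 0; j < n; j++) t[j] = b[j] + y[j];

    // Local output shares of x ⊗ y:
    //   P1: s ⊗ t - a ⊗ t - V        P2: U - s ⊗ b
    Mat st = outer(s, t), at = outer(a, t), sb = outer(s, b);
    for (size_t i = 0; i < m; i++)
        for (size_t j = 0; j < n; j++) {
            uint32_t p1 = st[i][j] - at[i][j] - V[i][j];
            uint32_t p2 = U[i][j] - sb[i][j];
            assert(p1 + p2 == x[i] * y[j]); // shares of (x ⊗ y)_{ij}
        }
    return 0;
}
```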

4.2.5. Joint Biometric Identification

In practice, we also consider the scenario where several parties would like to jointly identify a union of individual data, such as a criminal investigation by a joint investigation team or model training in federated learning. Formally speaking, consider clients $P_1, \ldots, P_n$ who would like to jointly launch a query to the server $S$, where each $P_i$ holds a batch of queries $X^{(i)} = (x_1^{(i)}, \ldots, x_{j_i}^{(i)})$ while the server $S$ holds a dataset of references $Y = (y_1, \ldots, y_n)$. Instead of launching the secure Euclidean distance computation protocol separately, they may realize the secure computation in a single packaged protocol, where each $P_i$ holds $(0, \ldots, 0, x_1^{(i)}, \ldots, x_{j_i}^{(i)}, 0, \ldots, 0)$ as its share of the joint input $X = (X^{(1)} \,|\, \ldots \,|\, X^{(n)})$. Then, no extra secret sharing step is needed, and the parties can directly compute $[D_X]_i = D_{X^{(i)}}$ without interaction. Note that the participants could individually invoke protocols with the server and obtain the same efficiency if we only consider the online phase. However, if the triples have been generated beforehand for all parties jointly, they cannot be used in these one-on-one communications with the server, and extra triple generation steps would therefore be required.

4.3. Privacy-Preserving Machine Learning

In many widely used machine learning training functionalities, such as linear regression (both the gradient descent method and the direct least-squares method), ridge regression, principal component analysis (PCA), and fully connected layers in a convolutional neural network (CNN), the matrix multiplication procedure dominates the overall complexity. Secure multi-party matrix multiplication may simply be realized using Beaver triples, but for a matrix multiplication of size $(m, k)$ by $(k, n)$, we need $k$ $(m, n)$-triples or $mnk$ Beaver triples. While the cost may not seem to differ much in low dimensions, the number of required Beaver triples grows as $mnk$ as the matrices grow. As an example, to carry out a secure multi-party matrix multiplication between two $1024 \times 1024$ matrices over $\mathbb{Z}_{2^{32}}$, we need $2^{30}$ Beaver triples in total. Even the generation process of this many Beaver triples is a burden for all the parties. For instance, if we use an RLWE-based method [42] for Beaver triple generation, the communication cost reaches an astonishing 256 TB. Meanwhile, the parties only need to consume 1024 $(1024, 1024)$-tensor triples to accomplish the computation, which only incurs approximately 115 GB of communication with the silent OT method. Due to the batch generation nature of sVOLE, it is much easier to generate tensor triples in the required amount.

4.3.1. Matrix-Triple-Based MPC

Apart from the classical Beaver triple method, previous works such as SecureML [24] and SecureNN [25] apply the matrix triple method to fulfill the secure matrix multiplication functionality. More specifically, the parties generate matrix triples of the form $([X], [Y], [Z])$, where $Z = XY$, and later use them to mask the input matrices. This seems a straightforward and convenient way to meet the requirement, but it no longer has the crucial pre-computability property of the original Beaver method. To be specific, if the parties choose to generate classical matrix triples $([X], [Y], [Z] = [X \cdot Y])$ to assist the computation of matrix multiplication, the parties must choose the dimensions of $X, Y, Z$ to be exactly the ones used in the online phase. This is because a matrix triple cannot be trimmed to form a matrix triple of smaller dimensions, as the following block matrix computation shows:
$$\begin{pmatrix} X_{\mathrm{trim}} & X_{12} \\ X_{21} & X_{22} \end{pmatrix} \begin{pmatrix} Y_{\mathrm{trim}} & Y_{12} \\ Y_{21} & Y_{22} \end{pmatrix} = \begin{pmatrix} X_{\mathrm{trim}} Y_{\mathrm{trim}} + X_{12} Y_{21} & \ast \\ \ast & \ast \end{pmatrix},$$
so the trimmed block $Z_{\mathrm{trim}} = X_{\mathrm{trim}} Y_{\mathrm{trim}} + X_{12} Y_{21}$ differs from $X_{\mathrm{trim}} Y_{\mathrm{trim}}$ by the cross term $X_{12} Y_{21}$, and $(X_{\mathrm{trim}}, Y_{\mathrm{trim}}, Z_{\mathrm{trim}})$ is not a valid matrix triple in general.
This loss of flexibility is by no means insignificant. In practice, the parties may not be able to know the sizes of the matrices involved in the MPC protocol beforehand. The matrices used in the MPC protocol may vary in size and dimension, so there is no universal way to generate the triples. In short, for the matrix triple method, the triples cannot, in general, be generated in the offline phase.
On the other hand, tensor triples are more flexible and applicable for matrix operations. As mentioned in previous sections, any $(n, n)$-triple can be trimmed to serve as two triples of sizes $(s, t)$ and $(n - s, n - t)$, respectively, for any $s, t < n$, as the sketch below illustrates. Therefore, secure multi-party matrix operations with different matrix sizes can be achieved using tensor triples of a universal size. Also, when the tensor triple technique is applied to achieve the secure multi-party matrix multiplication $A \cdot B$, the number of columns of $A$ (which equals the number of rows of $B$) can be arbitrary. This is extremely convenient in most applications, for example, the ones introduced earlier in this section. Hence, the tensor triple method retains the pre-computability property of the original Beaver method, as it does not require the parties to know anything about $k$ while still being able to utilize the pre-computed triples.
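A minimal C++ sketch of the trimming claim (our own illustration): each party restricts its shares locally, and the restricted shares still recombine to a valid $(s, t)$-triple.

```cpp
#include <cassert>
#include <cstdint>
#include <random>
#include <vector>

using Vec = std::vector<uint32_t>;
using Mat = std::vector<Vec>;
static std::mt19937 rng(9);
static uint32_t rnd() { return static_cast<uint32_t>(rng()); }

int main() {
    const size_t n = 6, s = 2, t = 3;
    // Additively shared (n, n) tensor triple between two parties.
    Vec u(n), v(n), u0(n), u1(n), v0(n), v1(n);
    Mat W0(n, Vec(n)), W1(n, Vec(n));
    for (size_t i = 0; i < n; i++) {
        u[i] = rnd(); v[i] = rnd();
        u0[i] = rnd(); u1[i] = u[i] - u0[i];
        v0[i] = rnd(); v1[i] = v[i] - v0[i];
    }
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            W0[i][j] = rnd();
            W1[i][j] = u[i] * v[j] - W0[i][j]; // shares of W = u ⊗ v
        }

    // Trimming is purely local: keep the first s entries of the u-share,
    // the first t entries of the v-share, and the top-left s x t block of
    // the W-share. Because W_ij = u_i v_j holds entrywise, the trimmed
    // shares still recombine to a valid (s, t)-triple, unlike a matrix
    // triple, where Z_trim picks up the cross term X_12 Y_21.
    for (size_t i = 0; i < s; i++)
        for (size_t j = 0; j < t; j++)
            assert(W0[i][j] + W1[i][j] == (u0[i] + u1[i]) * (v0[j] + v1[j]));
    return 0;
}
```

The complementary tail (entries $s, \ldots, n-1$ of $u$, entries $t, \ldots, n-1$ of $v$, and the bottom-right block of $W$) forms the $(n - s, n - t)$-triple by the same argument.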

4.3.2. Optimization for Horizontal Data

We also consider a practical scenario where a matrix is formed by concatenating separate datasets from multiple parties. This often happens when the data are in vector form, so that a concatenation of numerous data points naturally forms a matrix. In this case, the parties do not need to re-share their private inputs but can hold the data directly as the sharing of the result matrix. This saves considerable communication cost, especially when dealing with large-scale datasets.

5. Implementation and Performance

Our implementations are based on C++. All experiments, including those from the references, were run by us on a desktop with an AMD 3950X CPU and 48 GB of RAM. We consider both simulated LAN and WAN environments with 500 Mbps bandwidth and 20 ms one-way latency. The protocols are suitable for multi-threading via parallel computation, but we measure the performance in a single-thread setting. The experiments were executed 10 times, and the medians of the results are presented in the following tables. Source code is available at https://github.com/lzjluzijie/triple (accessed on 1 July 2025).

5.1. Tensor Triple Generation

We implement both the correlated-OT-based protocol and the silent-OT-based protocol for $\mathrm{RsVOLE}$. We use the libOTe library to provide basic functionalities, such as COT and OT extensions. For the silent OT, we use the expand-convolute code from [36] with expander weight 7 and convolution state size 24. The Hamming weight of the sparse vector in this setting is 400, which means that each silent VOLE requires an execution of an OT-extension-based subfield VOLE of size 400.
We present the performance as well as the corresponding communication cost for each method in Table 1, Table 2 and Table 3. It can be seen from the tables that the COT method is more efficient when tensor triples of small sizes are needed, while the SOT method allows the generation of tensor triples of large sizes with a moderate communication cost.
We also present a high-level analysis of the two methods. For small cases, the COT-based generation method is straightforward and faster, while the SOT-based method may take a considerable amount of time to build the structure for the protocol. When one deals with matrices of large dimensions or needs a large number of tensor triples, the linear overhead in the communication cost of the COT-based generation method may make the data transfer intolerable for the parties, while, on the other hand, as the communication cost of the SOT-based generation method is asymptotically sublinear, it requires much less communication to generate all the necessary tensor triples.

5.2. Matrix Multiplication

Table 4 shows the performance and communication cost of the online phase of our implementation of the tensor-triple-based secure multi-party matrix multiplication protocol, together with a comparison against the classical matrix triple method used in other works on secure machine learning (for instance, [24,25]); we use the server-aided method described in [25], which we consider to be the fastest implementation of the matrix method. In this table, we do not count the generation procedure of the matrix triples as part of the pre-processing phase, as we do not presume knowledge of the exact dimensions of the matrices used in the MPC protocol.
As a comparison with previous works [15,27,43], we also provide Table 5 on the performance of secure multi-party square matrix multiplication protocols. Due to unavailable source code or problems running the published code, we were not able to launch the experiments of these previous works under the same environment, but we believe that the statistics in the table already imply the high efficiency of the tensor triple method. Note that our experiments are executed with a single thread, and matrix multiplication is known to be highly parallelizable. Although not shown explicitly in the performance tables, parallel computation is capable of significantly reducing the overall computational cost. Hence, our performance significantly outperforms all previous works listed in the table. This is reasonable as these previous works rely heavily on homomorphic encryption, while our work mainly applies VOLE, which is comparatively lightweight.

5.3. Batched Privacy-Preserving Biometric Identification

In this section we present the performance of the tensor-triple-based implementations of batched queries for the FingerCode [21] and Eigenfaces [22] protocols, with an efficiency comparison against the GSHADE [6] protocol, and for FaceNet [23], with a comparison against the protocol of [44]. For FingerCode, we use 640-dimensional vectors of 8-bit elements and 32 bits to record each squared Euclidean distance. For Eigenfaces, we use 10,304-dimensional vectors of 8-bit elements and 32 bits to record each squared Euclidean distance. For FaceNet, the database consists of 128-dimensional vectors of floating point elements, but a truncation is applied to all elements to map them into 8-bit strings, and each final squared Euclidean distance consumes 64 bits. It can be seen from the comparison that tensor triples significantly accelerate the identification process. The performance data for the FingerCode and FaceNet protocols are collected from our own experiments. The experiments for all implementations are run in the same environment introduced at the beginning of this section.
Table 6 shows a comparison between the performance of our FingerCode implementation with COT-based triple generation and that of [6]. Clearly, the tensor triple method achieves a much better online performance. It is worth noting that, while the proposed method significantly reduces the online computation time, it incurs a higher communication overhead in the offline phase as a trade-off. Therefore, the tensor triple method is suitable for stateful protocols where the offline cost is not a major concern compared to the interaction phase.
Table 7 shows the performance of our implementation of the Eigenfaces protocol with COT-based triple generation. As a comparison, in [6] a single query for the $N = 320$ case takes 0.6 s, with a corresponding communication cost of 7.7 MB. When $N = 1000$, a single query takes 1.6 s and costs 9.4 MB. Although the performance in [6] also takes the secure comparison step into consideration, it can still clearly be seen that the tensor triple method behaves much better for batched queries.
Table 8 shows a comparison between the performance of our FaceNet implementation and that of [44]. We achieve a significant speedup of around 10,000 times.
One may argue that there is a pre-computation cost for the tensor triple method. We shall elaborate with two points. First, even if one considers the generation step, the tensor triple method still performs faster under almost all circumstances, as shown in the tables. Second, as we have pointed out, the tensor triple method truly enables a pre-computation process in secure multi-party matrix computation. For other protocols, for instance GSHADE for the FingerCode protocol, there is no valid way to split the protocol into online and offline phases, as the dimensions of the matrices are already involved in their fundamental components, such as the OT. Therefore, the comparisons in the listed tables should be considered fair.

6. Conclusions

The tensor triple is a new kind of correlation that is very suitable for multi-dimensional MPC due to its high efficiency and pre-computability. It can be used to accelerate and optimize many existing privacy-preserving biometric identification and privacy-preserving machine learning protocols, which mainly involve vector and matrix operations. Compared to the classical matrix triple method, the tensor triple method has the pre-computability property required in many real-world applications, and, compared to the classical Beaver triple method, the tensor triple method achieves high efficiency in both computation and communication.

Author Contributions

Conceptualization, D.W., B.L. and J.D.; methodology, D.W.; validation, D.W., B.L. and J.D.; implementation and coding, Z.L.; formal analysis, D.W.; writing—original draft preparation, D.W. and Z.L.; writing—review and editing, D.W., B.L., Z.L. and J.D.; project administration, J.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used in experiments are all generated randomly. No external data are involved. The source code has been made public on GitHub at https://github.com/lzjluzijie/triple (accessed on 1 July 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Erkin, Z.; Franz, M.; Guajardo, J.; Katzenbeisser, S.; Lagendijk, I.; Toft, T. Privacy-Preserving Face Recognition. In Proceedings of the 9th Privacy Enhancing Technologies Symposium (PETS 2009), Seattle, WA, USA, 5–7 August 2009; pp. 235–253. [Google Scholar]
  2. Blanton, M.; Gasti, P. Secure and Efficient Protocols for Iris and Fingerprint Identification. In Proceedings of the 16th European Symposium on Research in Computer Security, Leuven, Belgium, 12–14 September 2011; pp. 190–209. [Google Scholar]
  3. Huang, Y.; Malka, L.; Evans, D.; Katz, J. Efficient privacy-preserving biometric identification. In Proceedings of the 17th Conference Network and Distributed System Security Symposium, San Diego, CA, USA, 6–9 February 2011. [Google Scholar]
  4. Shahandashti, S.F.; Safavi-Naini, R.; Ogunbona, P. Private Fingerprint Matching. In Proceedings of the 17th Australasian Conference on Information Security and Privacy, Wollongong, Australia, 9–11 July 2012; pp. 426–433. [Google Scholar]
  5. Bringer, J.; Chabanne, H.; Patey, A. Privacy-Preserving Biometric Identification Using Secure Multiparty Computation: An Overview and Recent Trends. IEEE Signal Process. Mag. 2013, 30, 42–52. [Google Scholar] [CrossRef]
  6. Bringer, J.; Chabanne, H.; Favre, M.; Patey, A.; Schneider, T.; Zohner, M. GSHADE: Faster Privacy-Preserving Distance Computation and Biometric Identification. In Proceedings of the 2nd ACM Workshop on Information Hiding and Multimedia Security, Salzburg, Austria, 11–13 June 2014; pp. 187–198. [Google Scholar]
  7. Hahn, C.; Hur, J. Efficient and privacy-preserving biometric identification in cloud. ICT Express 2016, 2, 135–139. [Google Scholar] [CrossRef]
  8. Gomez-Barrero, M.; Galbally, J.; Morales, A.; Fierrez, J. Privacy-Preserving Comparison of Variable-Length Data with Application to Biometric Template Protection. IEEE Access 2017, 5, 8606–8619. [Google Scholar] [CrossRef]
  9. Ma, Z.; Liu, Y.; Liu, X.; Ma, J.; Ren, K. Lightweight Privacy-Preserving Ensemble Classification for Face Recognition. IEEE Internet Things J. 2019, 6, 5778–5790. [Google Scholar] [CrossRef]
  10. Nikolaenko, V.; Weinsberg, U.; Ioannidis, S.; Joye, M.; Boneh, D.; Taft, N. Privacy-Preserving Ridge Regression on Hundreds of Millions of Records. In Proceedings of the IEEE Symposium on Security and Privacy, San Francisco, CA, USA, 19–22 May 2013; pp. 334–348. [Google Scholar]
  11. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical Secure Aggregation for Privacy-Preserving Machine Learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 1175–1191. [Google Scholar]
  12. Mohassel, P.; Rindal, P. ABY3: A Mixed Protocol Framework for Machine Learning. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, ON, Canada, 15–19 October 2018; pp. 35–52. [Google Scholar]
  13. Mohassel, P.; Rosulek, M.; Trieu, N. Practical Privacy-Preserving K-means Clustering. Proc. Priv. Enhancing Technol. 2020, 2020, 414–433. [Google Scholar] [CrossRef]
  14. Koti, N.; Pancholi, M.; Patra, A.; Suresh, A. SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning. In Proceedings of the 30th USENIX Security Symposium (USENIX Security 21), Online, 11–13 August 2021; pp. 2651–2668. [Google Scholar]
  15. Chen, H.; Kim, M.; Razenshteyn, I.P.; Rotaru, D.; Song, Y.; Wagh, S. Maliciously Secure Matrix Multiplication with Applications to Private Deep Learning. In Proceedings of the Advances in Cryptology—ASIACRYPT 2020, Daejeon, Republic of Korea, 7–11 December 2020; pp. 31–59. [Google Scholar]
  16. Abspoel, M.; Cramer, R.; Damgard, I.; Escudero, D.; Rambaud, M.; Xing, C.; Yuan, C. Asymptotically Good Multiplicative LSSS over Galois Rings and Applications to MPC over Z/pkZ. In Proceedings of the Advances in Cryptology—ASIACRYPT 2020, Daejeon, Republic of Korea, 7–11 December 2020; pp. 151–180. [Google Scholar]
  17. Naor, M.; Pinkas, B. Oblivious Polynomial Evaluation. SIAM J. Comput. 2006, 35, 1254–1281. [Google Scholar] [CrossRef]
  18. Applebaum, B.; Damgard, I.; Ishai, Y.; Nielsen, M.; Zichron, L. Secure Arithmetic Computation with Constant Computational Overhead. In Proceedings of the Advances in Cryptology—CRYPTO 2017, Santa Barbara, CA, USA, 20–24 August 2017; pp. 223–254. [Google Scholar]
  19. Boyle, E.; Couteau, G.; Gilboa, N.; Ishai, Y. Compressing Vector OLE. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, ON, Canada, 15–19 October 2018; pp. 896–912. [Google Scholar]
  20. Boyle, E.; Couteau, G.; Gilboa, N.; Ishai, Y.; Kohl, L.; Rindal, P.; Scholl, P. Efficient Two-Round OT Extension and Silent Non-Interactive Secure Computation. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK, 11–15 November 2019; pp. 291–308. [Google Scholar]
  21. Jain, A.K.; Prabhakar, S.; Hong, L.; Pankanti, S. FingerCode: A filterbank for fingerprint representation and matching. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149), Fort Collins, CO, USA, 23–25 June 1999; Volume 2, pp. 187–193. [Google Scholar]
  22. Turk, M.; Pentland, A. Eigenfaces for Recognition. J. Cogn. Neurosci. 1991, 3, 71–86. [Google Scholar] [CrossRef] [PubMed]
  23. Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823. [Google Scholar]
  24. Mohassel, P.; Zhang, Y. SecureML: A System for Scalable Privacy-Preserving Machine Learning. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; pp. 19–38. [Google Scholar] [CrossRef]
  25. Wagh, S.; Gupta, D.; Chandran, N. SecureNN: Efficient and Private Neural Network Training. IACR Cryptol. ePrint Arch. 2018. Available online: https://eprint.iacr.org/2018/442 (accessed on 1 July 2025).
  26. Wu, D.; Liang, B.; Lu, Z.; Ding, J. Efficient Secure Multi-party Computation for Multi-dimensional Arithmetics and Its Application in Privacy-Preserving Biometric Identification. In Proceedings of the 23rd International Conference on Cryptology and Network Security, Cambridge, UK, 24–27 September 2024; pp. 3–25. [Google Scholar]
  27. Mono, J.; Güneysu, T. Implementing and Optimizing Matrix Triples with Homomorphic Encryption. In Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security, Melbourne, Australia, 10–14 July 2023; pp. 29–40. [Google Scholar]
  28. Rabin, M.O. How To Exchange Secrets with Oblivious Transfer. IACR Cryptol. ePrint Arch. 1981. Available online: https://eprint.iacr.org/2005/187.pdf (accessed on 1 July 2025).
  29. Ishai, Y.; Kilian, J.; Nissim, K.; Petrank, E. Extending Oblivious Transfers Efficiently. In Proceedings of the Advances in Cryptology—CRYPTO 2003, Santa Barbara, CA, USA, 17–21 August 2003; pp. 145–161. [Google Scholar]
  30. Kolesnikov, V.; Kumaresan, R. Improved OT Extension for Transferring Short Secrets. In Proceedings of the Advances in Cryptology—CRYPTO 2013, Santa Barbara, CA, USA, 18–22 August 2013; pp. 54–70. [Google Scholar]
  31. Kolesnikov, V.; Kumaresan, R.; Rosulek, M.; Trieu, N. Efficient Batched Oblivious PRF with Applications to Private Set Intersection. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 818–829. [Google Scholar]
  32. Asharov, G.; Lindell, Y.; Schneider, T.; Zohner, M. More Efficient Oblivious Transfer and Extensions for Faster Secure Computation. In Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, Berlin, Germany, 4–8 November 2013; pp. 535–548. [Google Scholar]
  33. Yang, K.; Weng, C.; Lan, X.; Zhang, J.; Wang, X. Ferret: Fast Extension for Correlated OT with Small Communication. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, Virtual, 9–13 November 2020; pp. 1607–1626. [Google Scholar]
  34. Schoppmann, P.; Gascón, A.; Reichert, L.; Raykova, M. Distributed Vector-OLE: Improved Constructions and Implementation. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK, 11–15 November 2019; pp. 1055–1072. [Google Scholar]
  35. Couteau, G.; Rindal, P.; Raghuraman, S. Silver: Silent VOLE and Oblivious Transfer from Hardness of Decoding Structured LDPC Codes. In Proceedings of the Advances in Cryptology—CRYPTO 2021, Virtual, 16–20 August 2021; pp. 502–534. [Google Scholar]
  36. Raghuraman, S.; Rindal, P.; Tanguy, T. Expand-Convolute Codes for Pseudorandom Correlation Generators from LPN. In Proceedings of the Advances in Cryptology—CRYPTO 2023, Santa Barbara, CA, USA, 20–24 August 2023; pp. 602–632. [Google Scholar]
  37. Gilboa, N. Two Party RSA Key Generation. In Proceedings of the Advances in Cryptology—CRYPTO’ 99, Santa Barbara, CA, USA, 15–19 August 1999; pp. 116–129. [Google Scholar]
  38. Demmler, D.; Schneider, T.; Zohner, M. ABY—A Framework for Efficient Mixed-Protocol Secure Two-Party Computation. In Proceedings of the Network and Distributed System Security Symposium, San Diego, CA, USA, 8–11 February 2015. [Google Scholar]
  39. Smart, N.P.; Tanguy, T. TaaS: Commodity MPC via Triples-as-a-Service. In Proceedings of the 2019 ACM SIGSAC Conference on Cloud Computing Security Workshop, London, UK, 11–15 November 2019. [Google Scholar]
40. Muth, P.; Katzenbeisser, S. Assisted MPC. IACR Cryptol. ePrint Arch. 2022. Available online: https://eprint.iacr.org/2022/1453 (accessed on 1 July 2025).
41. Wu, D. Highly Efficient Server-Aided Multiparty Subfield VOLE Distribution Protocol. IACR Cryptol. ePrint Arch. 2025. Available online: https://eprint.iacr.org/2025/029 (accessed on 1 July 2025).
  42. Rathee, D.; Schneider, T.; Shukla, K.K. Improved Multiplication Triple Generation over Rings via RLWE-Based AHE. In Proceedings of the Cryptology and Network Security, Fuzhou, China, 25–27 October 2019; pp. 347–359. [Google Scholar]
43. Damgård, I.; Pastro, V.; Smart, N.; Zakarias, S. Multiparty Computation from Somewhat Homomorphic Encryption. In Proceedings of the Advances in Cryptology—CRYPTO 2012, Santa Barbara, CA, USA, 19–23 August 2012; pp. 643–662. [Google Scholar]
  44. Naresh Boddeti, V. Secure Face Matching Using Fully Homomorphic Encryption. In Proceedings of the 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS), Redondo Beach, CA, USA, 22–25 October 2018; pp. 1–10. [Google Scholar]
Figure 1. Functionality of RCOT and RsVOLE.
Figure 2. Functionality of tensor triple generation.
Figure 3. Functionality of RsVOLE’.
Figure 4. sVOLE-based tensor triple generation.
Figure 5. COT-based RsVOLE.
Figure 6. Functionality of secure outer product.
Figure 7. Secure multi-party outer product.
Figure 8. Functionality of secure matrix product.
Figure 9. Secure multi-party matrix product.
Figure 10. Triple amalgamation algorithm.
Table 1. Performance of COT-based 32-bit (m, n)-tensor triple generation (in milliseconds). For each size, we generate 1, 2⁵, and 2¹⁰ triples (arranged in rows).

| m | Triples | n = 2³ LAN | n = 2³ WAN | n = 2⁸ LAN | n = 2⁸ WAN | n = 2¹⁰ LAN | n = 2¹⁰ WAN | n = 2¹⁴ LAN | n = 2¹⁴ WAN |
|---|---|---|---|---|---|---|---|---|---|
| 2³ | 1 | 10 | 188 | 10 | 312 | 12 | 394 | 53 | 985 |
| 2³ | 2⁵ | 14 | 317 | 23 | 649 | 60 | 1596 | 748 | 19,153 |
| 2³ | 2¹⁰ | 287 | 5836 | 567 | 15,354 | 1771 | 45,577 | 22,378 | 597,527 |
| 2⁸ | 1 | | | 29 | 654 | 108 | 1603 | 3906 | 22,553 |
| 2⁸ | 2⁵ | | | 409 | 9356 | 1737 | 37,379 | 85,391 | 602,922 |
| 2⁸ | 2¹⁰ | | | 12,603 | 309,499 | 57,274 | 1,181,690 | | |
| 2¹⁰ | 1 | | | | | 638 | 5530 | 22,323 | 92,565 |
| 2¹⁰ | 2⁵ | | | | | 14,003 | 149,363 | 567,917 | 2,665,457 |
| 2¹⁰ | 2¹⁰ | | | | | 425,520 | 4,732,780 | | |
Table 2. Performance of SOT-based 32-bit (m, n)-tensor triple generation (in milliseconds). For each size, we generate 1, 2⁵, and 2¹⁰ triples (arranged in rows).

| m | Triples | n = 2³ LAN | n = 2³ WAN | n = 2⁸ LAN | n = 2⁸ WAN | n = 2¹⁰ LAN | n = 2¹⁰ WAN | n = 2¹⁴ LAN | n = 2¹⁴ WAN |
|---|---|---|---|---|---|---|---|---|---|
| 2³ | 1 | 104 | 529 | 93 | 528 | 93 | 576 | 193 | 691 |
| 2³ | 2⁵ | 3865 | 4554 | 3765 | 4651 | 3840 | 4690 | 7922 | 7972 |
| 2³ | 2¹⁰ | 123,127 | 143,559 | 122,763 | 143,203 | 122,937 | 140,101 | 257,346 | 267,526 |
| 2⁸ | 1 | | | 593 | 1603 | 591 | 1646 | 763 | 1733 |
| 2⁸ | 2⁵ | | | 24,532 | 33,195 | 24,534 | 32,774 | 31,096 | 37,629 |
| 2⁸ | 2¹⁰ | | | 786,790 | 1,047,499 | 788,486 | 1,052,955 | 1,006,983 | 1,193,853 |
| 2¹⁰ | 1 | | | | | 2271 | 4850 | 2647 | 5252 |
| 2¹⁰ | 2⁵ | | | | | 92,154 | 137,287 | 104,976 | 142,970 |
| 2¹⁰ | 2¹⁰ | | | | | 2,991,083 | 4,441,299 | 3,395,699 | 4,606,848 |
Table 3. Communication cost of 32-bit (m, n)-tensor triple generation (in megabytes). For each size, we test the cost for 1, 2⁵, and 2¹⁰ triples (arranged in rows).

| m | Triples | n = 2³ COT | n = 2³ SOT | n = 2⁸ COT | n = 2⁸ SOT | n = 2¹⁰ COT | n = 2¹⁰ SOT | n = 2¹⁴ COT | n = 2¹⁴ SOT |
|---|---|---|---|---|---|---|---|---|---|
| 2³ | 1 | 0.03 | 1.00 | 0.52 | 1.00 | 2.02 | 1.00 | 32.02 | 1.23 |
| 2³ | 2⁵ | 0.76 | 32.86 | 16.26 | 32.86 | 64.26 | 32.86 | 1024.26 | 39.31 |
| 2³ | 2¹⁰ | 12.10 | 525.82 | 260.10 | 525.82 | 1028.10 | 525.82 | 16,388.08 | 628.95 |
| 2⁸ | 1 | | | 16.26 | 28.79 | 64.26 | 28.79 | 1024.26 | 28.99 |
| 2⁸ | 2⁵ | | | 520.01 | 921.21 | 2056.01 | 921.21 | 32,776.01 | 927.65 |
| 2⁸ | 2¹⁰ | | | 8320.08 | 14,739.36 | 32,896.08 | 14,739.36 | –– | 14,842.48 |
| 2¹⁰ | 1 | | | | | 257.01 | 114.76 | 4097.02 | 114.96 |
| 2¹⁰ | 2⁵ | | | | | 8224.01 | 3672.21 | 131,104.04 | 3678.65 |
| 2¹⁰ | 2¹⁰ | | | | | 130,969.60 | 58,755.36 | –– | 58,858.48 |
Table 4. Online performance and communication cost of triple-based 32-bit (m, k) × (k, n) matrix multiplication (time in milliseconds; communication in megabytes). Each column group corresponds to one value of m = n.

| k | Triple | 2³ LAN | 2³ WAN | 2³ Com. | 2⁵ LAN | 2⁵ WAN | 2⁵ Com. | 2⁸ LAN | 2⁸ WAN | 2⁸ Com. | 2¹⁰ LAN | 2¹⁰ WAN | 2¹⁰ Com. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2³ | Tensor | 1 | 26 | 0.001 | 1 | 26 | 0.004 | 1 | 27 | 0.031 | 1 | 44 | 0.13 |
| 2³ | Matrix | 2 | 142 | 0.002 | 3 | 142 | 0.010 | 3 | 267 | 0.307 | 7 | 529 | 4.19 |
| 2⁸ | Tensor | 1 | 26 | 0.031 | 1 | 37 | 0.125 | 22 | 62 | 1.00 | 292 | 323 | 4.00 |
| 2⁸ | Matrix | 3 | 143 | 0.047 | 3 | 225 | 0.191 | 92 | 588 | 1.75 | 4076 | 5220 | 10.00 |
| 2¹⁰ | Tensor | 1 | 37 | 0.125 | 2 | 44 | 0.500 | 71 | 128 | 4.00 | 1062 | 1289 | 16.00 |
| 2¹⁰ | Matrix | 3 | 225 | 0.188 | 8 | 391 | 0.754 | 372 | 896 | 6.25 | 12,799 | 15,134 | 28.00 |
Table 5. Comparison of our performance with previous works on 128-bit square matrix multiplication of size d × d (single thread by default, LAN environment, in seconds).

| d | [15] (16 thrds.) | [15] | SPDZ [43] | [27] (16 thrds.) | Ours | Ours (Offline) |
|---|---|---|---|---|---|---|
| 128 | 5.90 | 36 | 128 | 3.09 | 0.01 | 0.51 |
| 256 | 25.50 | 214 | 900 | 13.49 | 0.05 | 2.82 |
| 384 | 68.30 | 654 | 2808 | 33.60 | 0.16 | 9.22 |
| 512 | 138.00 | 1470 | 6300 | 67.39 | 0.40 | 18.94 |
| 1024 | 870.00 | 10,380 | 44,100 | 395.20 | 4.32 | 143.36 |
Table 6. Performance of secure squared Euclidean distance computation in batched FingerCode protocol (128 queries, LAN environment).

| Protocol | [6] (n = 128) | Ours (n = 128) | Ours, Offline (n = 128) | [6] (n = 1024) | Ours (n = 1024) | Ours, Offline (n = 1024) |
|---|---|---|---|---|---|---|
| Time (s) | 154.37 | 0.02 | 1.90 | 176.64 | 0.08 | 15.54 |
| Comm. (MB) | 1688.71 | 1.25 | 2640.02 | 5379.24 | 5.63 | 20,560.00 |
Table 7. Performance of Eigenfaces protocol without the secure comparison step (80 queries, LAN environment).

| Protocol | Ours (N = 320) | Ours, Offline (N = 320) | Ours (N = 1000) | Ours, Offline (N = 1000) |
|---|---|---|---|---|
| Time (s) | 0.03 | 41.44 | 0.03 | 124.24 |
| Communication (MB) | 14.57 | 65,207.00 | 14.69 | 202,057.00 |
Table 8. Performance (time in seconds, LAN environment) of secure squared Euclidean distance computation in batched FaceNet protocol for m queries against a database of n references.

| (m, n) | [44] | Ours (Online) | Ours (Offline, COT) | Ours (Offline, SOT) |
|---|---|---|---|---|
| (2⁴, 2⁴) | 2.58 | <0.01 | 0.02 | – |
| (2⁴, 2¹⁰) | 165.95 | 0.01 | 0.38 | 12.00 |
| (2¹⁰, 2¹⁰) | 10,559.68 | 0.25 | 19.46 | 1219.05 |