An Efficient Homomorphic Argmax Approximation for Privacy-Preserving Neural Networks
Abstract
1. Introduction
- We propose an efficient homomorphic argmax approximation algorithm that employs a tree-structured comparison to determine the position of the maximum value. The algorithm consists of four phases: rotation accumulation, tree-structured comparison, normalization, and finalization. Since the computation primarily involves homomorphic sign (SgnHE) and rotation operations, we compare the theoretical numbers of SgnHE and rotation operations required by our algorithm and by ArgmaxHE, and this comparative analysis validates the efficiency improvement.
- We integrated the proposed homomorphic argmax algorithm into a privacy-preserving neural network (PPNN) and evaluated our approach on two datasets, namely, MNIST and CIFAR-10. Compared with a PPNN using ArgmaxHE, our PPNN with the proposed argmax offered a slight increase in accuracy while significantly reducing the inference latency on both MNIST and CIFAR-10. This not only improves the user experience by delivering faster results but also enlarges the scope of applications of PPNNs by making them feasible to deploy in time-sensitive scenarios.
2. Related Works
3. Preliminaries
3.1. RNS-CKKS
- Addition: Slot-wise addition ⊕ of message vectors.
- Subtraction: Slot-wise subtraction ⊖ of message vectors.
- Multiplication: Slot-wise multiplication ⊙ of message vectors.
- Left rotation: Cyclic left rotation shifting slot positions.
- Right rotation: Cyclic right rotation shifting slot positions.
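To make the slot semantics concrete, the following Python sketch models each ciphertext purely by the vector of values packed into its slots. The helper names (he_add, he_sub, he_mul, he_rot_left, he_rot_right) are illustrative stand-ins, not part of any library API; real code would invoke the corresponding evaluator operations of an RNS-CKKS implementation such as Microsoft SEAL [16].

```python
import numpy as np

# Plaintext model of the RNS-CKKS message space: a "ciphertext" is viewed
# only through the vector of values packed into its slots.

def he_add(a, b):        # slot-wise addition (⊕)
    return np.add(a, b)

def he_sub(a, b):        # slot-wise subtraction (⊖)
    return np.subtract(a, b)

def he_mul(a, b):        # slot-wise multiplication (⊙)
    return np.multiply(a, b)

def he_rot_left(a, k):   # cyclic left rotation by k slot positions
    return np.roll(a, -k)

def he_rot_right(a, k):  # cyclic right rotation by k slot positions
    return np.roll(a, k)

v = np.array([1.0, 2.0, 3.0, 4.0])
print(he_add(v, v))        # [2. 4. 6. 8.]
print(he_rot_left(v, 1))   # [2. 3. 4. 1.]
```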
3.2. Composite Polynomial for Argmax in Phoenix
Algorithm 1 ArgmaxHE in Phoenix
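Since ArgmaxHE builds on a composite polynomial approximation of the sign function, the minimal Python sketch below illustrates the idea by iterating only the degree-3 polynomial f1(x) = (3x − x³)/2 from the f-family of Cheon et al. [12]. Practical constructions, including the one used in Phoenix, also compose g-type polynomials to accelerate convergence near 0, so the iteration depth chosen here is purely illustrative, not the value used in the paper.

```python
import numpy as np

def f1(x):
    # Degree-3 member of the f-family of Cheon et al. [12]:
    # f1(x) = (3x - x^3) / 2 pushes every value in [-1, 1] toward +/-1.
    return 0.5 * (3.0 * x - x ** 3)

def sgn_approx(x, depth=15):
    """Composite-polynomial approximation of sign(x) on [-1, 1].

    Simplified sketch: only f1 is iterated, and `depth` is an
    illustrative parameter rather than the depth used in Phoenix.
    """
    y = np.asarray(x, dtype=float)
    for _ in range(depth):
        y = f1(y)   # each iteration is one low-degree polynomial evaluation
    return y

xs = np.array([-0.8, -0.05, 0.02, 0.6])
print(np.round(sgn_approx(xs), 4))   # approximately [-1. -1.  1.  1.]
```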
4. An Efficient Homomorphic Argmax Approximation
4.1. The Proposed Algorithm
- Phase I: rotation accumulation (line 7 to line 9). In this phase, all numbers in the ciphertext are accumulated through rotations (see the rotate-and-add sketch after this list).
- Phase II: tree-structured comparison (line 10 to line 12). Initially, all numbers are divided into several small groups of equal size, and this step keeps only the first number in each group unchanged while the remaining numbers are set to 0. Then, pairs of small groups are combined to form a larger group, and a comparison determines which of the two small groups holds the larger value. If the first small group is larger, the first number in the large group is set to 1 and the remaining numbers to 0; if the second small group is larger, the first number in the large group is set to −1 and the rest to 0.
- Phase III: normalization (line 13 to line 16). Normalization is applied to obtain standardized comparison results for each group: within each large group, the slot at the position of the maximum is set to 2 and all other slots are set to 0.
- Phase IV: finalization (line 20 to line 21). To finalize the process, 1 is subtracted from each element, so that the slot holding the maximum becomes positive and all other slots become negative. The SgnHE function is then applied to transform the result into a logit composed solely of 1 and −1, which is mapped to a one-hot logit by multiplying each element by 0.5 and adding 0.5; the single 1 marks the position of the maximum (a plaintext sketch of the full flow follows Algorithm 2).
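The rotate-and-add sketch below, written against the plaintext slot model of Section 3.1, shows the standard way a rotation-accumulation step is realized with log₂(n) rotations. It is an assumed illustration of the Phase I idea rather than a reproduction of the exact steps in lines 7 to 9 of Algorithm 2.

```python
import numpy as np

def he_rot_left(a, k):
    # Plaintext stand-in for a cyclic left rotation of the ciphertext slots.
    return np.roll(a, -k)

def rotation_accumulate(ct, n):
    """Rotate-and-add: after log2(n) rotate/add rounds, every slot holds
    the sum of all n packed values. Shown on the plaintext slot model;
    one plausible reading of the rotation-accumulation phase."""
    acc = np.asarray(ct, dtype=float)
    k = 1
    while k < n:
        acc = acc + he_rot_left(acc, k)   # fold in the slots k positions away
        k *= 2
    return acc

print(rotation_accumulate([1.0, 2.0, 3.0, 4.0], 4))   # [10. 10. 10. 10.]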
Algorithm 2 Improved Homomorphic Argmax
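The following plaintext sketch ties the four phases together as a branch-free tournament: winners are selected with masks derived from a sign function rather than data-dependent control flow, and the final affine map 0.5x + 0.5 yields the one-hot logit. It models the logic of the tree-structured comparison under the assumptions of a power-of-two number of values and no exact ties; the exact slot layout and rotation schedule of Algorithm 2 are not reproduced here.

```python
import numpy as np

def sgn(x):
    # Plaintext stand-in for SgnHE; homomorphically, sign is evaluated
    # with the composite polynomial approximation of Section 3.2.
    return np.sign(x)

def tree_argmax_onehot(values):
    """Tournament (tree-structured) argmax returning a one-hot vector.

    Branch-free plaintext model: at every level one sign evaluation per
    merged group decides which half survives, and masks (not `if`
    statements) propagate the decision, mirroring the data-oblivious
    style required under homomorphic encryption.
    """
    values = np.asarray(values, dtype=float)
    n = len(values)
    assert n > 0 and (n & (n - 1)) == 0, "power-of-two length assumed"

    group_max = values.copy()   # slot `start` holds the max of the group starting there
    marker = np.ones(n)         # 1 where a slot is still a candidate for the maximum

    size = 1
    while size < n:
        for start in range(0, n, 2 * size):
            left = slice(start, start + size)
            right = slice(start + size, start + 2 * size)
            # One comparison per merged group: +1 if the left half wins, -1 otherwise.
            s = sgn(group_max[start] - group_max[start + size])
            w = 0.5 * (s + 1.0)                  # 1 if left wins, 0 if right wins
            marker[left] *= w                    # keep the winning half's marker
            marker[right] *= 1.0 - w             # zero out the losing half's marker
            group_max[start] = w * group_max[start] + (1.0 - w) * group_max[start + size]
        size *= 2

    # Finalization in the spirit of Phase IV: sharpen the indicator into
    # exactly {0, 1} via sign and the affine map 0.5*x + 0.5.
    return 0.5 * sgn(2.0 * marker - 1.0) + 0.5

print(tree_argmax_onehot([0.3, -0.1, 0.9, 0.5]))   # -> [0. 0. 1. 0.]
```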
4.2. Theoretical Analysis
5. Experimental Results
5.1. Experimental Platform
5.2. Experimental Design
5.3. Experimental Analysis
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Gilad-Bachrach, R.; Dowlin, N.; Laine, K.; Lauter, K.; Naehrig, M.; Wernsing, J. CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 201–210.
- Chabanne, H.; De Wargny, A.; Milgram, J.; Morel, C.; Prouff, E. Privacy-preserving classification on deep neural network. Cryptol. ePrint Arch. 2017. Available online: https://eprint.iacr.org/2017/035.pdf (accessed on 10 February 2024).
- Hesamifard, E.; Takabi, H.; Ghasemi, M. CryptoDL: Deep neural networks over encrypted data. arXiv 2017, arXiv:1711.05189.
- Wu, W.; Liu, J.; Wang, H.; Tang, F.; Xian, M. PPolyNets: Achieving high prediction accuracy and efficiency with parametric polynomial activations. IEEE Access 2018, 6, 72814–72823.
- Lee, J.; Lee, E.; Lee, J.W.; Kim, Y.; Kim, Y.S.; No, J.S. Precise approximation of convolutional neural networks for homomorphically encrypted data. IEEE Access 2023, 11, 62062–62076.
- Jovanovic, N.; Fischer, M.; Steffen, S.; Vechev, M. Private and reliable neural network inference. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, Los Angeles, CA, USA, 7–11 November 2022; pp. 1663–1677.
- Lee, J.W.; Kang, H.; Lee, Y.; Choi, W.; Eom, J.; Deryabin, M.; Lee, E.; Lee, J.; Yoo, D.; Kim, Y.S.; et al. Privacy-preserving machine learning with fully homomorphic encryption for deep neural network. IEEE Access 2022, 10, 30039–30054.
- Truong, J.B.; Maini, P.; Walls, R.J.; Papernot, N. Data-free model extraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4771–4780.
- Dathathri, R.; Kostova, B.; Saarikivi, O.; Dai, W.; Laine, K.; Musuvathi, M. EVA: An encrypted vector arithmetic language and compiler for efficient homomorphic computation. In Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation, London, UK, 15–20 June 2020; pp. 546–561.
- Lou, Q.; Jiang, L. HEMET: A homomorphic-encryption-friendly privacy-preserving mobile neural network architecture. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 7102–7110.
- Cheon, J.H.; Kim, D.; Kim, D.; Lee, H.H.; Lee, K. Numerical method for comparison on homomorphically encrypted numbers. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Kobe, Japan, 8–12 December 2019; pp. 415–445.
- Cheon, J.H.; Kim, D.; Kim, D. Efficient homomorphic comparison methods with optimal complexity. In Proceedings of the Advances in Cryptology–ASIACRYPT 2020: 26th International Conference on the Theory and Application of Cryptology and Information Security, Daejeon, Republic of Korea, 7–11 December 2020; pp. 221–256.
- Boemer, F.; Cammarota, R.; Demmler, D.; Schneider, T.; Yalame, H. MP2ML: A mixed-protocol machine learning framework for private inference. In Proceedings of the 15th International Conference on Availability, Reliability and Security, Virtual, 25–28 August 2020; pp. 1–10.
- Cheon, J.H.; Kim, A.; Kim, M.; Song, Y. Homomorphic encryption for arithmetic of approximate numbers. In Proceedings of the Advances in Cryptology–ASIACRYPT 2017: 23rd International Conference on the Theory and Applications of Cryptology and Information Security, Hong Kong, China, 3–7 December 2017; pp. 409–437.
- Cheon, J.H.; Han, K.; Kim, A.; Kim, M.; Song, Y. A full RNS variant of approximate homomorphic encryption. In Proceedings of the Selected Areas in Cryptography–SAC 2018: 25th International Conference, Calgary, AB, Canada, 15–17 August 2018; pp. 347–368.
- Microsoft SEAL (Release 3.5); Microsoft Research: Redmond, WA, USA, 2020. Available online: https://github.com/Microsoft/SEAL (accessed on 10 February 2024).
Algorithms | Number of SgnHEs | Number of Rotations
---|---|---
ArgmaxHE [6] | |
Algorithm | Accuracy | Time (ms)
---|---|---
ArgmaxHE [6] | 0.999426 | 285,169
Ours | 0.999459 | 119,639
Schemes | Dataset | Accuracy | Time (ms)
---|---|---|---
 | MNIST | 0.9531 | 248,636
 | MNIST | 0.9393 | 427,621
 | MNIST | 0.9412 | 358,733
 | CIFAR-10 | 0.4538 | 2,314,975
 | CIFAR-10 | 0.4446 | 2,493,693
 | CIFAR-10 | 0.4467 | 2,425,042