Article

Efficient Quantization and Data Access for Accelerating Homomorphic Encrypted CNNs

by Kai Chen 1,2, Xinyu Wang 1, Yuxiang Fu 1,* and Li Li 1,*
1 School of Electronic Science and Engineering, Nanjing University, Nanjing 210023, China
2 Jiangsu Huachuang Microsystem Company Limited, Nanjing 211800, China
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(3), 464; https://doi.org/10.3390/electronics14030464
Submission received: 11 December 2024 / Revised: 13 January 2025 / Accepted: 22 January 2025 / Published: 23 January 2025

Abstract

Owing to its ability to perform computations directly on encrypted data, homomorphic encryption (HE) has recently become an important branch of privacy-preserving machine learning (PPML). Nevertheless, existing implementations of HE-based convolutional neural network (HCNN) applications fall well short of their unencrypted counterparts in inference latency and area efficiency. In this work, we first improve the additive powers-of-two (APoT) quantization method for HCNNs to achieve a better tradeoff between the complexity of modular multiplication and network accuracy. An efficient multiplication-less modular multiplier–accumulator (M-MAC) unit is designed accordingly. Furthermore, we implement a batch-processing HCNN accelerator built from M-MACs, in which a proposed data partition scheme avoids repeatedly moving the large ciphertext polynomials. Compared to the latest FPGA design, our accelerator achieves an 11× resource reduction for a single M-MAC and a 2.36× speedup in inference latency for the widely used CNN-11 network processing 8K images. Our design also achieves significant speedups over the latest CPU and GPU implementations of batch-processing HCNN models.
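To make the idea concrete, the following is a minimal Python sketch, not the authors' implementation, of how APoT quantization enables a multiplication-less modular multiply–accumulate: each quantized weight is decomposed into a short sum of signed powers of two, so multiplying a ciphertext coefficient by a weight modulo q reduces to shifts and additions. The modulus Q, the number of terms, and the greedy decomposition below are illustrative assumptions rather than the paper's exact parameters.

# Illustrative ciphertext modulus; the paper's actual modulus is an assumption here.
Q = (1 << 28) - 57

def apot_decompose(w, n_terms=2):
    """Greedily approximate an integer weight w by at most n_terms signed
    powers of two, returning (sign, shift) pairs with w ~= sum(sign << shift)."""
    terms, r = [], w
    for _ in range(n_terms):
        if r == 0:
            break
        sign = 1 if r > 0 else -1
        shift = abs(r).bit_length() - 1
        # Round to the nearest power of two instead of truncating.
        if abs(r) - (1 << shift) > (1 << (shift + 1)) - abs(r):
            shift += 1
        terms.append((sign, shift))
        r -= sign * (1 << shift)
    return terms

def m_mac(acc, x, terms, q=Q):
    """Multiplication-less modular MAC: acc <- (acc + x * w) mod q,
    computed with only shifts, additions, and modular reductions."""
    for sign, shift in terms:
        acc = (acc + sign * (x << shift)) % q
    return acc

# Example: accumulate x * w (mod Q) for a weight of 100 quantized to two APoT terms.
w_terms = apot_decompose(100)      # [(1, 7), (-1, 5)] -> 128 - 32 = 96, approximating 100
acc = m_mac(0, 12345, w_terms)     # ((12345 << 7) - (12345 << 5)) mod Q

In hardware, each (sign, shift) term can map to a shifter plus a modular adder, which is how an M-MAC can avoid wide integer multipliers for the ciphertext-by-weight products.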
Keywords: homomorphic encryption; convolutional neural network; modular multiplication; hardware acceleration

