Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks
Abstract
1. Introduction
- We extend our preliminary study [17] to investigate the impact of model quantization on machine learning privacy. We demonstrate a 7- to 9-point drop in the precision of MIAs against quantized models compared to their full-precision counterparts.
- We propose a novel quantization algorithm whose primary goal is to enhance resistance to MIA while also improving efficiency.
- In our preliminary study [17], we tested the impact of quantization using a threshold-based MIA. In this paper, we comprehensively evaluate the proposed algorithm against a stronger form of MIA that trains shadow models. We demonstrate that our algorithm improves the model's resistance to MIA compared to the full-precision model.
2. Background and Related Work
2.1. Background
2.2. Related Work
2.2.1. Model Quantization
| Reference | Attack Knowledge | Corresponding Attack | Defense Mechanism |
|---|---|---|---|
| [37] | Black-box | Shadow training | Differential privacy |
| [38] | Black-box and White-box | Classifier based and Prediction loss | Distillation |
| [39] | Black-box | Classifier based and Prediction correctness | Prediction purification |
| [40] | Black-box | Shadow training | Regularization |
| [41] | Black-box | Shadow training | Regularization |
| [42] | Black-box | Classifier based | MemGuard |
2.2.2. Defense against MIA
3. Proposed Defense against MIA
3.1. Threat Model
- Access to the target model: We assume the adversary can only access the target model's output. This is referred to as black-box access [41].
- Access to the data: Although the adversary does not have access to the training data, we assume the adversary can sample from the available pool of data that has the same distribution as the training data.
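Under this threat model, a shadow-training MIA can be sketched as follows. This is a simplified, illustrative version with a single shadow model and assumed classifier choices (`MLPClassifier` for the shadow, `LogisticRegression` for the attack); the original shadow-training attack of Shokri et al. trains multiple shadow models per class.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

def attack_features(model, X):
    """Sorted confidence vector for each query -- the only information a
    black-box adversary observes from the model's output."""
    return np.sort(model.predict_proba(X), axis=1)

def train_attack_model(X_in, y_in, X_out, y_out):
    """Shadow-training MIA sketch: the adversary trains a shadow model on
    data sampled from the same distribution as the target's training set,
    then labels its own queries as member/non-member to fit a binary
    attack classifier."""
    shadow = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                           random_state=0)
    shadow.fit(X_in, y_in)  # shadow "members" are its own training points
    feats = np.vstack([attack_features(shadow, X_in),
                       attack_features(shadow, X_out)])
    membership = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])
    attack = LogisticRegression().fit(feats, membership)
    return shadow, attack
```

The trained attack model is then applied to the target model's confidence vectors to predict membership of individual records.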
3.2. MIA Algorithm
3.3. Proposed Quantization Scheme
Algorithm 1 The training procedure of the proposed quantization scheme.
Require: Original DNN parameterized as f(x; W); bitwidth b; scale s; zero-point z; set of layers L
Set the number of training epochs T
for each of the T epochs do
    for each layer i in L do
        ▹ After backpropagation, the weights of layer i are quantized
    end for
end for
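A minimal PyTorch sketch of this training loop, assuming a uniform affine quantizer with a shared scale s and zero-point z. The paper's exact quantizer and per-layer parameters are not reproduced here, so `quantize_weights` should be read as illustrative.

```python
import torch
import torch.nn as nn

def quantize_weights(w, b, s, z):
    """Uniform affine quantization of a weight tensor to b bits with scale s
    and zero-point z (symbols follow Algorithm 1; the rounding scheme is an
    assumption). Returns the de-quantized weights."""
    qmin, qmax = 0, 2 ** b - 1
    q = torch.clamp(torch.round(w / s + z), qmin, qmax)
    return (q - z) * s

def train_quantized(model, loader, b, s, z, epochs, lr=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):                       # "for each of the T epochs"
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            with torch.no_grad():                 # "for each layer i in L"
                for layer in model.modules():
                    if isinstance(layer, (nn.Conv2d, nn.Linear)):
                        # after backpropagation, quantize this layer's weights
                        layer.weight.copy_(quantize_weights(layer.weight, b, s, z))
    return model
```

Quantizing after each optimizer step keeps the stored weights on the quantization grid throughout training, rather than quantizing only once after training converges.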
4. Experimental Results
4.1. Experimental Settings
4.1.1. Datasets
4.1.2. Model Architectures
4.1.3. MIA Algorithms
4.1.4. Baseline Quantization Method
4.2. Results
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Li, F.F. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Ofer, D.; Brandes, N.; Linial, M. The language of proteins: NLP, machine learning & protein sequences. Comput. Struct. Biotechnol. J. 2021, 19, 1750–1758.
- Wu, J.; Leng, C.; Wang, Y.; Hu, Q.; Cheng, J. Quantized convolutional neural networks for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Los Alamitos, CA, USA, 27–30 June 2016; pp. 4820–4828.
- Zhou, S.; Ni, Z.; Zhou, X.; Wen, H.; Wu, Y.; Zou, Y. DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients. arXiv 2016, arXiv:1606.06160.
- Guo, R.; Sun, P.; Lindgren, E.; Geng, Q.; Simcha, D.; Chern, F.; Kumar, S. Accelerating Large-Scale Inference with Anisotropic Vector Quantization. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 November 2020; pp. 3887–3896.
- Giger, M.L. Machine Learning in Medical Imaging. J. Am. Coll. Radiol. 2018, 15, 512–520.
- Kocić, J.; Jovičić, N.; Drndarević, V. An end-to-end deep neural network for autonomous driving designed for embedded automotive platforms. Sensors 2019, 19, 2064.
- Prakash, R.M.; Thenmoezhi, N.; Gayathri, M. Face recognition with convolutional neural network and transfer learning. In Proceedings of the International Conference on Smart Systems and Inventive Technology, Tirunelveli, India, 27–29 November 2019; pp. 861–864.
- Clements, J.; Lao, Y. DeepHardMark: Towards Watermarking Neural Network Hardware. In Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI), Virtual, 22 February–1 March 2022; Volume 36, pp. 4450–4458.
- Zhao, B.; Lao, Y. CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets. In Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI), Virtual, 22 February–1 March 2022; Volume 36, pp. 9162–9170.
- Ma, J.; Zhang, J.; Shen, G.; Marshall, A.; Chang, C.H. White-Box Adversarial Attacks on Deep Learning-Based Radio Frequency Fingerprint Identification. arXiv 2023, arXiv:2308.07433.
- Song, C.; Fallon, E.; Li, H. Improving adversarial robustness in weight-quantized neural networks. arXiv 2020, arXiv:2012.14965.
- Aprilpyone, M.; Kinoshita, Y.; Kiya, H. Adversarial Robustness by One Bit Double Quantization for Visual Classification. IEEE Access 2019, 7, 177932–177943.
- Lin, J.; Gan, C.; Han, S. Defensive quantization: When efficiency meets robustness. arXiv 2019, arXiv:1904.08444.
- Pan, X.; Zhang, M.; Yan, Y.; Yang, M. Understanding the Threats of Trojaned Quantized Neural Network in Model Supply Chains. In Proceedings of the Annual Computer Security Applications Conference, New York, NY, USA, 6–10 December 2021; pp. 634–645.
- Ma, H.; Qiu, H.; Gao, Y.; Zhang, Z.; Abuadbba, A.; Fu, A.; Al-Sarawi, S.; Abbott, D. Quantization Backdoors to Deep Learning Models. arXiv 2021, arXiv:2108.09187.
- Kowalski, C.; Famili, A.; Lao, Y. Towards Model Quantization on the Resilience Against Membership Inference Attacks. In Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 16–19 October 2022; pp. 3646–3650.
- Chen, M.X.; Lee, B.N.; Bansal, G.; Cao, Y.; Zhang, S.; Lu, J.; Tsay, J.; Wang, Y.; Dai, A.M.; Chen, Z.; et al. Gmail smart compose: Real-time assisted writing. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2287–2295.
- Shokri, R.; Stronati, M.; Song, C.; Shmatikov, V. Membership inference attacks against machine learning models. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–24 May 2017; pp. 3–18.
- Salem, A.; Zhang, Y.; Humbert, M.; Berrang, P.; Fritz, M.; Backes, M. Ml-leaks: Model and data independent membership inference attacks and defenses on machine learning models. arXiv 2018, arXiv:1806.01246.
- Liu, Y.; Zhao, Z.; Backes, M.; Zhang, Y. Membership inference attacks by exploiting loss trajectory. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, Los Angeles, CA, USA, 7–11 November 2022; pp. 2085–2098.
- Yeom, S.; Giacomelli, I.; Fredrikson, M.; Jha, S. Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting. In Proceedings of the 2018 IEEE 31st Computer Security Foundations Symposium (CSF), Oxford, UK, 9–12 July 2018; pp. 268–282.
- Song, L.; Shokri, R.; Mittal, P. Privacy risks of securing machine learning models against adversarial examples. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK, 11–15 November 2019; pp. 241–257.
- Liu, Z.; Cheng, K.T.; Huang, D.; Xing, E.P.; Shen, Z. Nonuniform-to-uniform quantization: Towards accurate quantization via generalized straight-through estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 4942–4952.
- Courbariaux, M.; Hubara, I.; Soudry, D.; El-Yaniv, R.; Bengio, Y. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv 2016, arXiv:1602.02830.
- Liu, B.; Li, F.; Wang, X.; Zhang, B.; Yan, J. Ternary weight networks. In Proceedings of the ICASSP 2023–2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–5.
- Park, E.; Yoo, S.; Vajda, P. Value-aware quantization for training and inference of neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 580–595.
- Baskin, C.; Liss, N.; Schwartz, E.; Zheltonozhskii, E.; Giryes, R.; Bronstein, A.M.; Mendelson, A. Uniq: Uniform noise injection for non-uniform quantization of neural networks. ACM Trans. Comput. Syst. 2021, 37, 1–15.
- Esser, S.K.; McKinstry, J.L.; Bablani, D.; Appuswamy, R.; Modha, D.S. Learned step size quantization. arXiv 2019, arXiv:1902.08153.
- Yang, J.; Shen, X.; Xing, J.; Tian, X.; Li, H.; Deng, B.; Huang, J.; Hua, X.S. Quantization networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 7308–7316.
- Dong, Z.; Yao, Z.; Arfeen, D.; Gholami, A.; Mahoney, M.W.; Keutzer, K. Hawq-v2: Hessian aware trace-weighted quantization of neural networks. Adv. Neural Inf. Process. Syst. 2020, 33, 18518–18529.
- Cai, Z.; Vasconcelos, N. Rethinking differentiable search for mixed-precision neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 2349–2358.
- Wu, B.; Wang, Y.; Zhang, P.; Tian, Y.; Vajda, P.; Keutzer, K. Mixed precision quantization of convnets via differentiable neural architecture search. arXiv 2018, arXiv:1812.00090.
- Liu, Z.; Wang, Y.; Han, K.; Ma, S.; Gao, W. Instance-aware dynamic neural network quantization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 12434–12443.
- So, J.; Lee, J.; Ahn, D.; Kim, H.; Park, E. Temporal Dynamic Quantization for Diffusion Models. arXiv 2023, arXiv:2306.02316.
- Song, Z.; Fu, B.; Wu, F.; Jiang, Z.; Jiang, L.; Jing, N.; Liang, X. DRQ: Dynamic Region-based Quantization for Deep Neural Network Acceleration. In Proceedings of the 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA), Valencia, Spain, 30 May–3 June 2020; pp. 1010–1021.
- Chen, Q.; Xiang, C.; Xue, M.; Li, B.; Borisov, N.; Kaarfar, D.; Zhu, H. Differentially private data generative models. arXiv 2018, arXiv:1812.02274.
- Shejwalkar, V.; Houmansadr, A. Membership privacy for machine learning models through knowledge transfer. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 May 2021; Volume 35, pp. 9549–9557.
- Yang, Z.; Shao, B.; Xuan, B.; Chang, E.C.; Zhang, F. Defending model inversion and membership inference attacks via prediction purification. arXiv 2020, arXiv:2005.03915.
- Nasr, M.; Shokri, R.; Houmansadr, A. Machine learning with membership privacy using adversarial regularization. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, ON, Canada, 15–19 October 2018; pp. 634–646.
- Li, J.; Li, N.; Ribeiro, B. Membership inference attacks and defenses in classification models. In Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy, Virtual, 26–28 April 2021; pp. 5–16.
- Jia, J.; Salem, A.; Backes, M.; Zhang, Y.; Gong, N.Z. Memguard: Defending against black-box membership inference attacks via adversarial examples. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK, 11–15 November 2019; pp. 259–274.
- Iyengar, R.; Near, J.P.; Song, D.; Thakkar, O.; Thakurta, A.; Wang, L. Towards practical differentially private convex optimization. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 20–22 May 2019; pp. 299–316.
- Dwork, C.; McSherry, F.; Nissim, K.; Smith, A. Calibrating noise to sensitivity in private data analysis. In Proceedings of the Theory of Cryptography: Third Theory of Cryptography Conference, New York, NY, USA, 4–7 March 2006; pp. 265–284.
- Yuan, X.; Zhang, L. Membership inference attacks and defenses in neural network pruning. In Proceedings of the 31st USENIX Security Symposium (USENIX Security 22), Boston, MA, USA, 10–12 August 2022; pp. 4561–4578.
- Song, L.; Mittal, P. Systematic evaluation of privacy risks of machine learning models. In Proceedings of the 30th USENIX Security Symposium (USENIX Security 21), Virtual, 11–13 August 2021; pp. 2615–2632.
- Choquette-Choo, C.A.; Tramer, F.; Carlini, N.; Papernot, N. Label-only membership inference attacks. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 1964–1974.
- Watson, L.; Guo, C.; Cormode, G.; Sablayrolles, A. On the importance of difficulty calibration in membership inference attacks. arXiv 2021, arXiv:2111.08440.
- Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-mnist: A novel image dataset for benchmarking machine learning algorithms. arXiv 2017, arXiv:1708.07747.
- Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images; University of Toronto: Toronto, ON, Canada, 2009.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- Carlini, N.; Chien, S.; Nasr, M.; Song, S.; Terzis, A.; Tramer, F. Membership inference attacks from first principles. In Proceedings of the 2022 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 23–25 May 2022; pp. 1897–1914.
| Symbol | Definition |
|---|---|
| | Shadow model |
| | Target model |
| | Attack model (binary classifier) |
| | Shadow model dataset |
| | Target training dataset |
| Model | Bitwidth | Shadow Model Accuracy | Attack Model Accuracy |
|---|---|---|---|
| LeNet | 4 | 82.39% | 50.07% |
| LeNet | 8 | 83.20% | 50.21% |
| LeNet | 16 | 83.02% | 50.20% |
| LeNet | full | 88.26% | 53.40% |
| ResNet-20 | 4 | 51.22% | 69.50% |
| ResNet-20 | 8 | 51.38% | 72.50% |
| ResNet-20 | 16 | 70.62% | 66.87% |
| ResNet-20 | full | 60.58% | 72.30% |
| ResNet-50 | 4 | 58.90% | 64.10% |
| ResNet-50 | 8 | 60.38% | 59.38% |
| ResNet-50 | 16 | 67.70% | 56.89% |
| ResNet-50 | full | 54.01% | 71.10% |
| Model | Bitwidth | Class | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| LeNet | 4 | Non-Member | 0.51 | 0.06 | 0.11 |
| LeNet | 4 | Member | 0.50 | 0.94 | 0.65 |
| LeNet | 8 | Non-Member | 0.51 | 0.16 | 0.24 |
| LeNet | 8 | Member | 0.50 | 0.85 | 0.63 |
| LeNet | 16 | Non-Member | 0.50 | 0.23 | 0.31 |
| LeNet | 16 | Member | 0.50 | 0.78 | 0.61 |
| LeNet | full | Non-Member | 0.64 | 0.23 | 0.34 |
| LeNet | full | Member | 0.53 | 0.87 | 0.66 |
| ResNet-20 | 4 | Non-Member | 0.52 | 0.82 | 0.64 |
| ResNet-20 | 4 | Member | 0.58 | 0.26 | 0.36 |
| ResNet-20 | 8 | Non-Member | 0.57 | 0.73 | 0.64 |
| ResNet-20 | 8 | Member | 0.62 | 0.44 | 0.52 |
| ResNet-20 | 16 | Non-Member | 0.76 | 0.49 | 0.60 |
| ResNet-20 | 16 | Member | 0.62 | 0.85 | 0.72 |
| ResNet-20 | full | Non-Member | 1.00 | 0.35 | 0.52 |
| ResNet-20 | full | Member | 0.61 | 1.00 | 0.75 |
| ResNet-50 | 4 | Non-Member | 0.59 | 0.50 | 0.54 |
| ResNet-50 | 4 | Member | 0.57 | 0.65 | 0.61 |
| ResNet-50 | 8 | Non-Member | 0.65 | 0.41 | 0.50 |
| ResNet-50 | 8 | Member | 0.57 | 0.78 | 0.66 |
| ResNet-50 | 16 | Non-Member | 0.56 | 0.60 | 0.58 |
| ResNet-50 | 16 | Member | 0.57 | 0.54 | 0.55 |
| ResNet-50 | full | Non-Member | 0.95 | 0.37 | 0.54 |
| ResNet-50 | full | Member | 0.61 | 0.98 | 0.75 |
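The F1-scores in the table above are the harmonic mean of precision and recall, which can be checked directly (a trivial helper, not from the paper):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; defined as 0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, for ResNet-20 at full precision on the non-member class, 2 * 1.00 * 0.35 / (1.00 + 0.35) ≈ 0.52, matching the table.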
| Model | Bitwidth | Attack Accuracy | TN | FP | FN | TP |
|---|---|---|---|---|---|---|
| LeNet | 4 | 50.07% | 3.24% | 46.76% | 3.17% | 46.82% |
| LeNet | 8 | 50.21% | 7.79% | 42.21% | 7.57% | 42.42% |
| LeNet | 16 | 50.20% | 11.29% | 38.70% | 11.09% | 38.90% |
| LeNet | full | 54.89% | 11.50% | 38.50% | 6.61% | 43.39% |
| ResNet-20 | 4 | 53.59% | 40.76% | 9.23% | 37.18% | 12.82% |
| ResNet-20 | 8 | 65.88% | 36.30% | 13.69% | 27.86% | 22.13% |
| ResNet-20 | 16 | 66.84% | 23.50% | 26.49% | 6.66% | 43.33% |
| ResNet-20 | full | 67.54% | 17.56% | 32.43% | 0.02% | 49.97% |
| ResNet-50 | 4 | 57.66% | 25.09% | 24.90% | 17.43% | 32.56% |
| ResNet-50 | 8 | 59.38% | 20.47% | 29.52% | 11.09% | 38.90% |
| ResNet-50 | 16 | 56.89% | 30.13% | 19.86% | 23.24% | 26.75% |
| ResNet-50 | full | 67.69% | 18.68% | 31.32% | 1.02% | 49.00% |
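Since each confusion-matrix entry above is expressed as a percentage of all attack queries, the reported attack accuracy is simply TN + TP, and precision and recall follow from the same four entries. A small illustrative helper (not from the paper):

```python
def attack_metrics(tn, fp, fn, tp):
    """Derive attack accuracy, precision, and recall from confusion-matrix
    entries given as percentages of all attack queries."""
    accuracy = tn + tp              # correct predictions: true negatives + true positives
    precision = tp / (tp + fp)      # fraction of predicted members that are members
    recall = tp / (tp + fn)         # fraction of actual members detected
    return accuracy, precision, recall
```

For example, for LeNet at full precision, 11.50% + 43.39% = 54.89%, matching the reported attack accuracy.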
| Method | Bitwidth | F1-Score | Precision | Recall |
|---|---|---|---|---|
| DoReFa-Net | 4 | 70.12 | 54.00 | 100.00 |
| DoReFa-Net | 16 | 71.79 | 56.00 | 100.00 |
| Proposed | 4 | 50.00 | 55.00 | 64.00 |
| Proposed | 8 | 58.00 | 59.05 | 59.50 |
| Proposed | 16 | 66.00 | 88.05 | 67.00 |
| Proposed | full | 77.30 | 63.00 | 100.00 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Famili, A.; Lao, Y. Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks. Sensors 2023, 23, 7722. https://doi.org/10.3390/s23187722