PLDP-FL: Federated Learning with Personalized Local Differential Privacy
Abstract
1. Introduction
- Personalized perturbation mechanism. We provide a novel personalized local differential privacy data perturbation mechanism. By adjusting the privacy budget parameter and introducing a distinct safe-range parameter for each client, the mechanism enables clients to articulate their different privacy requirements and obtain correspondingly different levels of privacy protection (see the sketch after this list).
- Privacy-preserving framework. We propose a privacy-preserving federated learning method that addresses the demand for personalized local differential privacy. Because the intermediate parameters are processed by the personalized perturbation algorithm, the server cannot deduce a participant’s private information from them, thereby meeting the personalized privacy-protection requirements of the federated learning process.
- High-quality model. Extensive experimental results show that our scheme obtains a high-quality model even when clients impose privacy-protection requirements, making it well suited to real-world applications with device heterogeneity.
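To make the personalized parameters concrete, the sketch below shows one plausible way a client could declare its own privacy requirement. The type and field names (PrivacyParams, eps, r, c) are illustrative inventions following the notation used later in Algorithm 2, not an API defined by the paper.

```python
from dataclasses import dataclass

@dataclass
class PrivacyParams:
    """Hypothetical per-client privacy specification."""
    eps: float  # privacy budget: smaller values mean stronger protection
    r: float    # length of the client's safe range
    c: float    # midpoint of the client's safe range

# Each client declares its own requirement independently:
alice = PrivacyParams(eps=0.5, r=2.0, c=0.0)  # strict privacy, more noise
bob   = PrivacyParams(eps=4.0, r=1.0, c=0.0)  # looser privacy, less noise
```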
2. Related Work
3. Preliminaries
3.1. Local Differential Privacy
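For reference, the standard definition this section builds on (Kasiviswanathan et al.): a randomized algorithm $\mathcal{M}$ satisfies $\epsilon$-LDP if, for any pair of input values $v, v'$ and any possible output $y^*$,

$$\Pr[\mathcal{M}(v) = y^*] \le e^{\epsilon} \cdot \Pr[\mathcal{M}(v') = y^*].$$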
3.2. Personalized Local Differential Privacy
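Following the usual PLDP formulation (Chen et al.), each client $u_i$ specifies a pair $(\epsilon_i, I_i)$, where $I_i$ is a safe range with midpoint $c$ and length $r$. A randomized algorithm $\mathcal{M}$ satisfies $(\epsilon_i, I_i)$-PLDP for $u_i$ if, for any $v, v' \in I_i$ and any output $y^*$,

$$\Pr[\mathcal{M}(v) = y^*] \le e^{\epsilon_i} \cdot \Pr[\mathcal{M}(v') = y^*].$$

Indistinguishability is thus only required within the client-chosen safe range, which is what allows the protection level to be personalized.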
3.3. Random Response
3.3.1. Random Response
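In Warner’s classic randomized response for a binary value $v \in \{0, 1\}$, the respondent reports truthfully with probability $p$ and flips the answer otherwise; setting

$$p = \frac{e^{\epsilon}}{e^{\epsilon} + 1}$$

makes the mechanism satisfy $\epsilon$-LDP.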
3.3.2. Generalized Random Response
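Generalized randomized response (k-RR, Kairouz et al.) extends this to a domain of size $k$: the true value $v$ is reported with probability $p$, and each of the other $k-1$ values with probability $q$, where

$$p = \frac{e^{\epsilon}}{e^{\epsilon} + k - 1}, \qquad q = \frac{1}{e^{\epsilon} + k - 1},$$

so that $p/q = e^{\epsilon}$ and $\epsilon$-LDP holds.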
3.4. Federated Deep Learning
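The FedAvg aggregation rule used in this setting combines the locally trained parameters by a weighted average,

$$W_{t+1} = \sum_{i=1}^{K} \frac{n_i}{n} W_{t+1}^{i},$$

where $W_{t+1}^{i}$ are client $u_i$’s parameters after local SGD, $n_i = |D_i|$, and $n = \sum_{i=1}^{K} n_i$.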
4. System Overview
4.1. Problem Definition
4.2. System Model
4.3. Threat Model
- The adversary can eavesdrop on the intermediate parameters exchanged between entities over the public channel and thereby infer a client’s private information from them.
- The adversary can compromise the parameter server and exploit the server’s intermediate parameters or aggregation results to deduce the private information of a local participant.
- The adversary can corrupt one or more local participants and use their information to infer the private details of other local participants. We assume the adversary cannot corrupt all local participants simultaneously.
4.4. Design Goals
- Data privacy: Our scheme should guarantee that adversaries cannot access local participants’ private data, either directly or through the intermediate parameters and aggregated results.
- Local participants’ personalized privacy functionality: Considering the varying sensitivity of each participant’s private data and their differing privacy preferences, our scheme should ensure that every participant can adjust its privacy parameters to satisfy its personalized privacy-protection requirements.
- Model accuracy: Our scheme should ensure that the model converges in theory while remaining practically feasible, i.e., privacy-protected and non-privacy-protected federated learning should train models of almost the same quality.
5. Proposed Scheme
5.1. Steps of Implementation
Algorithm 1 PLDP-FL
Require: $N$ is the number of local clients, $\gamma$ is the local client selection factor, $K = \max(\gamma \cdot N, 1)$ is the number of chosen clients, $E$ is the number of local epochs, $B$ is the local mini-batch size, $\eta$ is the learning rate, $\mathcal{L}$ is the loss function.
Ensure: The trained model $W$.
5.1.1. Server Update
5.1.2. Client Update
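A minimal Python sketch of the server/client interplay described above, assuming uniform averaging of the perturbed parameters and a toy one-parameter linear model; it illustrates the control flow only, not the paper’s reference implementation. Here pdpm is a placeholder for the perturbation of Section 5.2.

```python
import random

def pdpm(v, eps, r, c):
    # Placeholder; see the concrete PDPM sketch in Section 5.2.2.
    return v

class Client:
    def __init__(self, data, eps, r, c, lr=0.01, epochs=1):
        self.data = data                      # list of (x, y) samples
        self.eps, self.r, self.c = eps, r, c  # personalized privacy parameters
        self.lr, self.epochs = lr, epochs

    def client_update(self, w):
        """Local SGD on a toy linear model, then personalized perturbation."""
        for _ in range(self.epochs):
            for x, y in self.data:
                grad = (w * x - y) * x        # squared-loss gradient
                w -= self.lr * grad
        return pdpm(w, self.eps, self.r, self.c)

def server_update(w0, clients, rounds, gamma):
    """Each round: sample K clients, collect perturbed parameters, average."""
    w = w0
    for _ in range(rounds):
        k = max(int(gamma * len(clients)), 1)
        chosen = random.sample(clients, k)
        w = sum(c.client_update(w) for c in chosen) / k
    return w
```

Uniform averaging is used here for simplicity; the dataset-size-weighted FedAvg rule of Section 3.4 is a drop-in replacement.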
5.2. Personalized Perturbation Algorithm
Algorithm 2 PDPM
Require: The personalized privacy parameters $(\epsilon, I)$, the range length of $I$ is $r$, the midpoint of $I$ is $c$, and the client’s input value $v$.
Ensure: The value after perturbation $v^*$.
5.2.1. Set Privacy Parameter
5.2.2. Compute Perturbation Value
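As a concrete reference point, the following sketch implements a discrete two-point perturbation in the style of LDP-FL (Sun et al.), parameterized by the client’s personal $(\epsilon, r, c)$. The paper’s PDPM may use different constants, but this variant is unbiased and satisfies $\epsilon$-LDP restricted to the safe range $[c - r/2,\, c + r/2]$, i.e., PLDP.

```python
import math
import random

def pdpm(v, eps, r, c):
    """Two-point perturbation sketch (assumptions: r > 0, eps > 0).

    Reports one of two values symmetric around the midpoint c; the
    probability p is chosen so that E[output] = v (unbiased) and the
    ratio of report probabilities for any v, v' in the safe range is
    at most e^eps, i.e., eps-LDP restricted to the range.
    """
    half = r / 2.0
    v = min(max(v, c - half), c + half)   # clip into the safe range
    e = math.exp(eps)
    spread = half * (e + 1) / (e - 1)     # magnitude of the two report points
    p = ((v - c) * (e - 1) + half * (e + 1)) / (2 * half * (e + 1))
    return c + spread if random.random() < p else c - spread
```

Averaging many such reports concentrates around the true mean, which is why the aggregated global model of Section 5.1 can remain accurate despite per-client noise.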
6. Performance Evaluation
6.1. Experimental Setup
6.1.1. Datasets
6.1.2. Metrics
6.1.3. Privacy Parameters
6.1.4. Environment
6.2. Experimental Results
6.2.1. Impact of Privacy Parameters on Synthetic Dataset
6.2.2. Impact of Privacy Parameters on MNIST and Fashion-MNIST
6.2.3. Impact of Different Privacy Parameters Combinations on MNIST and Fashion-MNIST
6.2.4. Comparison of Scheme PLDP-FL against the Existing Approaches
6.3. Functionality Comparison
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
FL | federated learning |
LDP | local differential privacy |
PLDP | personalized local differential privacy |
PDPM | perturbation algorithm that satisfies personalized local differential privacy |
AI | artificial intelligence |
ML | machine learning |
SMC | secure multi-party computation |
HE | homomorphic encryption |
DP | differential privacy |
GDP | global differential privacy |
FedAvg | federated averaging algorithm |
SGD | stochastic gradient descent |
PLDP-FL | federated learning with personalized local differential privacy |
Notation | Description |
$\mathcal{M}$ | A randomized algorithm for DP |
$v, v'$ | Any pair of input values |
$v^*$ | The perturbed value of the input value $v$ |
$(\epsilon_i, I_i)$ | The privacy parameters of client $u_i$ |
$r, c$ | The length and midpoint of the safe range, respectively |
$u_i$ | The $i$-th client |
$D_i$ | The dataset held by client $u_i$ |
$x_j, y_j$ | The input data and true label in the dataset $D_i$, respectively |
$N$ | The number of all clients |
$K$ | The number of chosen clients |
$\gamma$ | The local client selection factor |
$T$ | The number of aggregation rounds |
$\mathcal{L}$ | Loss function |
$W^*$ | The optimal model parameters that minimize $\mathcal{L}$ |
$W_t^i$ | Local parameters of the $i$-th client in the $t$-th round |
${W_t^i}^*$ | Local parameters after perturbation of the $i$-th client in the $t$-th round |
$W_t$ | Global parameters after aggregation |