pFedBASC: Personalized Federated Learning with Blockchain-Assisted Semi-Centralized Framework
Abstract
1. Introduction
- We propose a blockchain-assisted semi-centralized pFL framework called pFedBASC, which provides a reliable collaborative training environment for distributed IoT data and offers guidance for the design of BFL algorithms.
- We design the block structure of the blockchain for pFedBASC and formally model and describe the semi-centralized framework. Building on this, we design an FL aggregation method that adjusts aggregation weights using loss values and delayed rounds (a weighting sketch follows this list).
- We propose a pFL algorithm called FedHype, which exploits the characteristics of hypernetworks. FedHype significantly improves overall pFL performance and meets the personalized training needs of different users. We also integrate FedHype with other existing algorithms, further extending its functionality (a generic hypernetwork sketch accompanies Algorithm 1 below).
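The aggregation method adjusts each peer model's weight using its loss value and the number of rounds its parameters have been delayed; since the exact formula lives in Section 4.3 rather than here, the snippet below is only a minimal sketch under assumed functional forms (a softmax over negative losses and an exponential staleness decay), not the authors' precise rule.

```python
import math

def aggregation_weights(losses, delays, temperature=1.0, decay=0.5):
    # Loss term: a softmax over negative losses favors low-loss models
    # (this functional form is an assumption, not the paper's formula).
    scores = [math.exp(-loss / temperature) for loss in losses]
    # Staleness term: exponentially down-weight parameters that are
    # several rounds old (again an assumed form).
    scores = [s * decay ** d for s, d in zip(scores, delays)]
    total = sum(scores)
    return [s / total for s in scores]

def aggregate(models, weights):
    # Weighted average of flat parameter vectors (lists of floats).
    return [sum(w * m[i] for w, m in zip(weights, models))
            for i in range(len(models[0]))]

# Example: peer 0 is fresh but has the higher loss; peer 1's parameters
# are two rounds stale but were trained to a lower loss.
w = aggregation_weights(losses=[0.8, 0.5], delays=[0, 2])
print(w)                                      # approx. [0.748, 0.252]
print(aggregate([[1.0, 2.0], [3.0, 4.0]], w))
```

Under these assumptions, a low-loss but stale model and a fresh but higher-loss model can receive comparable weights, which is exactly the trade-off such a weighting rule balances.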
2. Related Work
2.1. Blockchain-Based Federated Learning
2.2. Personalized Federated Learning
3. Preliminaries
3.1. Semi-Centralized Federated Learning Framework
3.2. Hypernetwork
3.3. Problem Definition
4. Methodology
4.1. pFedBASC Framework Overview
4.2. Block Structure Design
- Header information: The block header records the information needed for the blockchain's historical records and traceability. It contains three standardized fields: the block ID, a unique identifier that allows other blocks to distinguish and reference the current block; the timestamp, which records when the block was added to the chain, supporting later tracebacks and timeline reconstruction for any anomalies; and the block type, which indicates whether the block is a download, evaluation, or upload block, guiding the processing of its block-specific information. In addition, the client ID records the identity of the client involved in the operation, making the actor traceable.
- Model information: This part contains the information and attributes necessary for the FL training process. It is strongly related to the block type, and detailed descriptions will follow based on the type of block involved.
- Validation information: The block includes data crucial for verifying the accuracy of the content within the blockchain. By recording the previous hash of the parent block and the current block’s hash, the blockchain’s tamper-proof nature is ensured, preventing attackers from compromising the integrity of the blockchain by altering a single block.
- Download Block: These blocks record the model parameters that a client downloads from the blockchain system. This enables the system to update the contribution score of the model's uploader or to grant rewards based on intellectual property rights. Moreover, if a client's local model later encounters security issues or errors, the download records can be traced to determine whether the problem originated from a model stored in the blockchain system, ensuring traceability. In the header of a download block, the client ID records which client performed the download. The model information section records which upload block's parameters were downloaded, specifically the block ID where those parameters reside; recording the block ID instead of the parameters themselves substantially reduces the storage this block requires.
- Evaluation Block: These blocks record a client's evaluation of model parameters stored in the blockchain system. Primarily, they log the loss value obtained by running the model's inference on that client, allowing the system to track the quality and reliability of a particular set of model parameters. This tracking can inform incentives for clients who upload high-quality parameters, encouraging high-quality training. In the header, the evaluation block's client ID identifies which client performed the evaluation, enhancing traceability. The model information section records which upload block's parameters were evaluated, including that block's ID and the corresponding scores. This paper designs and discusses evaluation blocks but does not delve into their further utilization.
- Upload Block: These blocks store a client's newly aggregated local model parameters for each round. They are crucial for providing other clients with reliable parameters for aggregation, improving the accuracy and generalizability of local models across clients. In the header, the upload block's client ID indicates which client uploaded the parameters, ensuring traceability. The model information section records all model parameters uploaded in this operation. A minimal code sketch of this three-part block layout follows this list.
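To make the layout above concrete, here is a minimal sketch of how the header, model, and validation fields might be encoded. The field names, the JSON serialization, and the SHA-256 hash chain are illustrative assumptions; the paper specifies the fields but not a concrete encoding.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class Block:
    # --- Header information: identity, timing, type, acting client ---
    block_id: str
    block_type: str        # "download" | "evaluation" | "upload"
    client_id: str
    timestamp: float = field(default_factory=time.time)
    # --- Model information: contents depend on the block type ---
    #   download:   {"source_block_id": ...}  (block ID only, not weights)
    #   evaluation: {"evaluated_block_id": ..., "loss": ...}
    #   upload:     {"parameters": [...]}     (the full model parameters)
    model_info: dict = field(default_factory=dict)
    # --- Validation information: hash chain for tamper evidence ---
    prev_hash: str = ""
    curr_hash: str = ""

    def compute_hash(self) -> str:
        payload = asdict(self)
        payload.pop("curr_hash")   # the hash covers everything but itself
        return hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()

# Example: a download block referencing an earlier upload block by ID,
# which keeps the block small compared with storing the parameters again.
blk = Block(block_id="b42", block_type="download", client_id="c7",
            model_info={"source_block_id": "b17"}, prev_hash="0" * 64)
blk.curr_hash = blk.compute_hash()
print(blk.curr_hash[:16])
```

Note how the download block carries only a source block ID in its model information, matching the storage-saving choice described above, while an upload block would carry the full parameter list.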
4.3. pFedBASC Aggregation Method
4.4. Proposed FedHype
Algorithm 1: FedHype
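Algorithm 1 itself is not reproduced in this extraction. As a stand-in, the snippet below sketches the general hypernetwork-based personalization pattern that FedHype builds on (in the spirit of pFedHN by Shamsian et al., cited below): a shared hypernetwork maps a learnable per-client embedding to the weights of a small target network, so each client obtains personalized parameters while the hypernetwork captures shared knowledge. All module sizes and names are assumptions; this is not the authors' exact algorithm.

```python
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """A shared hypernetwork mapping a learnable per-client embedding to
    the weights of a small linear target network. A generic sketch in the
    spirit of pFedHN, not the authors' exact FedHype algorithm."""
    def __init__(self, n_clients, embed_dim=16, target_in=32, target_out=10):
        super().__init__()
        self.embeddings = nn.Embedding(n_clients, embed_dim)
        n_params = target_out * target_in + target_out   # weight + bias
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, n_params))
        self.target_in, self.target_out = target_in, target_out

    def forward(self, client_id):
        flat = self.mlp(self.embeddings(client_id))
        split = self.target_out * self.target_in
        weight = flat[:split].view(self.target_out, self.target_in)
        bias = flat[split:]
        return weight, bias

# One personalized step for client 3: generate that client's weights,
# compute a local loss, and update the hypernetwork plus the embedding.
hn = HyperNet(n_clients=10)
opt = torch.optim.SGD(hn.parameters(), lr=0.01)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
weight, bias = hn(torch.tensor(3))
loss = nn.functional.cross_entropy(x @ weight.t() + bias, y)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```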
4.5. pFedBASC Workflow
Algorithm 2: pFedBASC
5. Experiments
5.1. Experiment Settings
5.2. Impact of System Heterogeneity
5.3. Performance Evaluation of pFedBASC
5.4. Robustness of pFedBASC
5.5. FedHype Scalability
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, Fort Lauderdale, FL, USA, 20–22 April 2017; Volume 54, pp. 1273–1282. Available online: http://proceedings.mlr.press/v54/mcmahan17a.html (accessed on 10 September 2023).
- Bonawitz, K.A.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical Secure Aggregation for Privacy-Preserving Machine Learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, 30 October–3 November 2017; pp. 1175–1191.
- Guo, S.; Zhang, K.; Gong, B.; Chen, L.; Ren, Y.; Qi, F.; Qiu, X. Sandbox Computing: A Data Privacy Trusted Sharing Paradigm Via Blockchain and Federated Learning. IEEE Trans. Comput. 2023, 72, 800–810.
- Lu, Y.; Huang, X.; Zhang, K.; Maharjan, S.; Zhang, Y. Blockchain and Federated Learning for 5G Beyond. IEEE Netw. 2021, 35, 219–225.
- Feng, L.; Zhao, Y.; Guo, S.; Qiu, X.; Li, W.; Yu, P. BAFL: A Blockchain-Based Asynchronous Federated Learning Framework. IEEE Trans. Comput. 2022, 71, 1092–1103.
- Gao, L.; Li, L.; Chen, Y.; Xu, C.; Xu, M. FGFL: A blockchain-based fair incentive governor for Federated Learning. J. Parallel Distrib. Comput. 2022, 163, 283–299.
- Nguyen, D.C.; Hosseinalipour, S.; Love, D.J.; Pathirana, P.N.; Brinton, C.G. Latency Optimization for Blockchain-Empowered Federated Learning in Multi-Server Edge Computing. IEEE J. Sel. Areas Commun. 2022, 40, 3373–3390.
- Qu, Y.; Gao, L.; Xiang, Y.; Shen, S.; Yu, S. FedTwin: Blockchain-Enabled Adaptive Asynchronous Federated Learning for Digital Twin Networks. IEEE Netw. 2022, 36, 183–190.
- Wang, Y.; Peng, H.; Su, Z.; Luan, T.H.; Benslimane, A.; Wu, Y. A Platform-Free Proof of Federated Learning Consensus Mechanism for Sustainable Blockchains. IEEE J. Sel. Areas Commun. 2022, 40, 3305–3324.
- Wang, W.; Wang, Y.; Huang, Y.; Mu, C.; Sun, Z.; Tong, X.; Cai, Z. Privacy protection federated learning system based on blockchain and edge computing in mobile crowdsourcing. Comput. Netw. 2022, 215, 109206.
- Rückel, T.; Sedlmeir, J.; Hofmann, P. Fairness, integrity, and privacy in a scalable blockchain-based federated learning system. Comput. Netw. 2022, 202, 108621.
- Cui, L.; Su, X.; Zhou, Y. A Fast Blockchain-Based Federated Learning Framework With Compressed Communications. IEEE J. Sel. Areas Commun. 2022, 40, 3358–3372.
- Li, Z.; Yu, H.; Zhou, T.; Luo, L.; Fan, M.; Xu, Z.; Sun, G. Byzantine Resistant Secure Blockchained Federated Learning at the Edge. IEEE Netw. 2021, 35, 295–301.
- Pokhrel, S.R.; Choi, J. Federated Learning With Blockchain for Autonomous Vehicles: Analysis and Design Challenges. IEEE Trans. Commun. 2020, 68, 4734–4746.
- Li, Y.; Chen, C.; Liu, N.; Huang, H.; Zheng, Z.; Yan, Q. A Blockchain-Based Decentralized Federated Learning Framework with Committee Consensus. IEEE Netw. 2021, 35, 234–241.
- Feng, L.; Yang, Z.; Guo, S.; Qiu, X.; Li, W.; Yu, P. Two-Layered Blockchain Architecture for Federated Learning Over the Mobile Edge Network. IEEE Netw. 2022, 36, 45–51.
- Tan, A.Z.; Yu, H.; Cui, L.; Yang, Q. Towards Personalized Federated Learning. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 9587–9603.
- Mansour, Y.; Mohri, M.; Ro, J.; Suresh, A.T. Three Approaches for Personalization with Applications to Federated Learning. arXiv 2020, arXiv:2002.10619.
- Sun, J.; Chen, T.; Giannakis, G.B.; Yang, Z. Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients. In Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, Vancouver, BC, Canada, 8–14 December 2019; pp. 3365–3375. Available online: https://proceedings.neurips.cc/paper/2019/hash/4e87337f366f72daa424dae11df0538c-Abstract.html (accessed on 25 November 2023).
- Fallah, A.; Mokhtari, A.; Ozdaglar, A.E. Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach. In Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, Virtual, 6–12 December 2020; Available online: https://proceedings.neurips.cc/paper/2020/hash/24389bfe4fe2eba8bf9aa9203a44cdad-Abstract.html (accessed on 27 November 2023).
- Dinh, C.T.; Vu, T.T.; Tran, N.H.; Dao, M.N.; Zhang, H. FedU: A Unified Framework for Federated Multi-Task Learning with Laplacian Regularization. arXiv 2021, arXiv:2102.07148.
- Smith, V.; Chiang, C.; Sanjabi, M.; Talwalkar, A. Federated Multi-Task Learning. In Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, Long Beach, CA, USA, 4–9 December 2017; pp. 4424–4434. Available online: https://proceedings.neurips.cc/paper/2017/hash/6211080fa89981f66b1a0c9d55c61d0f-Abstract.html (accessed on 27 November 2023).
- Collins, L.; Hassani, H.; Mokhtari, A.; Shakkottai, S. Exploiting Shared Representations for Personalized Federated Learning. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, Virtual Event, 18–24 July 2021; Volume 139, pp. 2089–2099. Available online: http://proceedings.mlr.press/v139/collins21a.html (accessed on 28 November 2023).
- Huang, Y.; Chu, L.; Zhou, Z.; Wang, L.; Liu, J.; Pei, J.; Zhang, Y. Personalized Cross-Silo Federated Learning on Non-IID Data. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Event, 2–9 February 2021; pp. 7865–7873.
- Ouyang, X.; Xie, Z.; Zhou, J.; Huang, J.; Xing, G. ClusterFL: A similarity-aware federated learning system for human activity recognition. In Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services, Virtual Event, WI, USA, 24 June–2 July 2021; pp. 54–66.
- Li, X.; Jiang, M.; Zhang, X.; Kamp, M.; Dou, Q. FedBN: Federated Learning on Non-IID Features via Local Batch Normalization. In Proceedings of the 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, 3–7 May 2021; OpenReview.net. 2021. Available online: https://openreview.net/forum?id=6YEQUn0QICG (accessed on 15 February 2024).
- Arivazhagan, M.G.; Aggarwal, V.; Singh, A.K.; Choudhary, S. Federated Learning with Personalization Layers. arXiv 2019, arXiv:1912.00818.
- Oh, J.; Kim, S.; Yun, S. FedBABU: Towards Enhanced Representation for Federated Image Classification. arXiv 2021, arXiv:2106.06042.
- Zhang, M.; Sapra, K.; Fidler, S.; Yeung, S.; Álvarez, J.M. Personalized Federated Learning with First Order Model Optimization. In Proceedings of the 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, 3–7 May 2021; OpenReview.net. 2021. Available online: https://openreview.net/forum?id=ehJqJQk9cw (accessed on 16 February 2024).
- Schmidhuber, J. Learning to Control Fast-Weight Memories: An Alternative to Dynamic Recurrent Networks. Neural Comput. 1992, 4, 131–139.
- Shamsian, A.; Navon, A.; Fetaya, E.; Chechik, G. Personalized Federated Learning using Hypernetworks. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, Virtual Event, 18–24 July 2021; Volume 139, pp. 9489–9502. Available online: http://proceedings.mlr.press/v139/shamsian21a.html (accessed on 15 February 2024).
- Ma, X.; Zhang, J.; Guo, S.; Xu, W. Layer-wised Model Aggregation for Personalized Federated Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, 18–24 June 2022; pp. 10082–10091.
- Chen, X.; Huang, Y.; Xie, Z.; Pang, J. HyperFedNet: Communication-Efficient Personalized Federated Learning Via Hypernetwork. arXiv 2024, arXiv:2402.18445.
- Chen, H.; Chao, W. On Bridging Generic and Personalized Federated Learning for Image Classification. In Proceedings of the Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, 25–29 April 2022; OpenReview.net. 2022. Available online: https://openreview.net/forum?id=I1hQbx10Kxn (accessed on 18 February 2024).
- Ha, D.; Dai, A.M.; Le, Q.V. HyperNetworks. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, 24–26 April 2017; Available online: https://openreview.net/forum?id=rkpACe1lx (accessed on 30 November 2023).
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv 2017, arXiv:1708.07747.
- Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images; Technical Report; University of Toronto: Toronto, ON, Canada, 2009. Available online: https://www.cs.utoronto.ca/~kriz/learning-features-2009-TR.pdf (accessed on 15 February 2024).
- Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated Optimization in Heterogeneous Networks. In Proceedings of the Machine Learning and Systems 2020, MLSys 2020, Austin, TX, USA, 2–4 March 2020; mlsys.org. 2020. Available online: https://proceedings.mlsys.org/paper_files/paper/2020/file/1f5fe83998a09396ebe6477d9475ba0c-Paper.pdf (accessed on 15 February 2024).
- Zhu, Z.; Hong, J.; Zhou, J. Data-Free Knowledge Distillation for Heterogeneous Federated Learning. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, Virtual Event, 18–24 July 2021; Volume 139, pp. 12878–12889. Available online: http://proceedings.mlr.press/v139/zhu21b.html (accessed on 15 February 2024).
- Acar, D.A.E.; Zhao, Y.; Navarro, R.M.; Mattina, M.; Whatmough, P.N.; Saligrama, V. Federated Learning Based on Dynamic Regularization. In Proceedings of the 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, 3–7 May 2021; OpenReview.net. 2021. Available online: https://openreview.net/forum?id=B7v4QMR6Z9w (accessed on 15 February 2024).
- Tan, J.; Zhou, Y.; Liu, G.; Wang, J.H.; Yu, S. pFedSim: Similarity-Aware Model Aggregation Towards Personalized Federated Learning. arXiv 2023, arXiv:2305.15706.
| Method | Accuracy (System Heterogeneity) | Device Runtime (System Heterogeneity) | Accuracy (Non-System Heterogeneity) | Device Runtime (Non-System Heterogeneity) | Training Time Increase Ratio |
|---|---|---|---|---|---|
| FedAvg | 60.96% | 36.67% | 63.44% | 45.82% | 2.06× |
| FedProx | 58.98% | 34.45% | 63.26% | 44.75% | 2.31× |
| FedDyn | 59.74% | 35.29% | 61.33% | 47.68% | 2.38× |
| FedGen | 57.76% | 36.45% | 60.85% | 49.73% | 2.07× |
| FedBN | 60.24% | 32.29% | 62.91% | 42.2% | 2.04× |
| FedBabu | 64.01% | 36.06% | 66.79% | 42.59% | 2.02× |
| FedPer | 63.54% | 36.66% | 65.76% | 43.9% | 1.93× |
| FedRep | 65.6% | 32.66% | 68.19% | 42.64% | 2.72× |
| pFedLa | 57.71% | 36.91% | 60.26% | 45.92% | 2.25× |
| pFedBASC | 65.73% | 100% | 69.8% | 100% | 1.38× |
| Method | MNIST non-iid | MNIST iid | FMNIST non-iid | FMNIST iid | CIFAR-10 non-iid | CIFAR-10 iid | CIFAR-100 non-iid | CIFAR-100 iid |
|---|---|---|---|---|---|---|---|---|
| Central | 90.3% | 90.3% | 85.38% | 85.38% | 76.99% | 76.99% | 49.48% | 49.48% |
| FedAvg | 89.43% | 90.02% | 83.84% | 81.56% | 63.44% | 56.24% | 30.72% | 30.62% |
| Local | 86.39% | 84.79% | 78.72% | 69.92% | 51.75% | 27.59% | 18.21% | 6.33% |
| FedProx | 89.75% | 90.09% | 84.68% | 82.04% | 63.26% | 55.13% | 28.28% | 30.94% |
| FedDyn | 89.44% | 89.98% | 84.76% | 82.18% | 61.33% | 56.05% | 26.43% | 31.13% |
| FedGen | 88.78% | 89.1% | 83.54% | 80.86% | 60.85% | 56.76% | 23.92% | 14.57% |
| FedBN | 90.09% | 90.07% | 82.98% | 81.55% | 62.91% | 30.59% | 32.13% | 29.14% |
| FedBabu | 89.9% | 89.25% | 85.21% | 82.59% | 66.79% | 58.52% | 39.74% | 33.3% |
| FedPer | 89.55% | 89.62% | 83.86% | 81.61% | 65.76% | 57.04% | 28.3% | 15.53% |
| FedRep | 88.75% | 89.11% | 83.45% | 78.8% | 68.19% | 59.57% | 26.1% | 11.2% |
| pFedSim | 89.58% | 89.88% | 83.78% | 82.4% | 64.05% | 59.48% | 39.72% | 33.75% |
| pFedLa | 89.59% | 89.14% | 83.96% | 80.49% | 60.26% | 51.76% | 26.73% | 23.73% |
| pFedBASC | 88.45% | 89.45% | 84.54% | 83.19% | 69.8% | 70.99% | 43.6% | 41.31% |

(Central trains a single model on the pooled data, so one value per dataset applies to both splits.)
| Method | CIFAR-10 (40%) | CIFAR-100 (10%) |
|---|---|---|
| FedFomo | 1.82 | 0.52 |
| pFedHN | 2.08 | 10.86 |
| Fed-RoD | 9.58 | 6.17 |
| pFedBASC | 7.97 | 10.62 |
| Method | MNIST non-iid | MNIST iid | FMNIST non-iid | FMNIST iid | CIFAR-10 non-iid | CIFAR-10 iid | CIFAR-100 non-iid | CIFAR-100 iid |
|---|---|---|---|---|---|---|---|---|
| FedHype + FedAvg | −1.22% | −0.9% | 1.18% | 1.13% | 10.75% | 12.33% | 10.53% | 10.94% |
| FedHype + FedProx | −1.09% | −0.66% | 0.3% | 0.82% | 6.54% | 13.65% | 12.95% | 10.06% |
| FedHype + FedDyn | −0.98% | −0.66% | −0.41% | 0.98% | 7.47% | 13.16% | 14.58% | 9.39% |
| FedHype + FedBN | −1.03% | −1.55% | −1.07% | 0.98% | 0.13% | 29.35% | 3.67% | 4.25% |
| FedHype + pFedLa | −0.46% | −0.25% | 0.26% | 2.51% | 8.76% | 16.18% | 13.24% | 15.62% |
| FedHype + FedFomo | 4.74% | 5.25% | 7.8% | 12.44% | 13.83% | 40.25% | 16.7% | 22.88% |