Adaptive Quantization Mechanism for Federated Learning Models Based on DAG Blockchain
Abstract
1. Introduction
- (1) A k-means-based adaptive model compression scheme is proposed to reduce communication overhead: instead of transmitting the original model, each node transmits the compressed model. This not only reduces the nodes' communication overhead but also improves the security of the system (a minimal sketch of the compression idea follows this list).
- (2) A grading-based tips selection algorithm is proposed, which integrates the accuracy of each iteration during FL training with the overall communication overhead of the system, so that the FL model iterates toward higher accuracy.
- (3) A convolutional neural network (CNN) model is trained on the handwritten-digit recognition task using the MNIST dataset. Extensive experiments show that our CDAG-FL scheme saves approximately 16% of the communication overhead compared to the baseline method.
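As a concrete illustration of contribution (1), the sketch below quantizes a weight tensor with k-means and reports the resulting payload arithmetic. It is a minimal reconstruction under assumed names (`quantize_weights`, `dequantize_weights`), not the paper's implementation; scikit-learn's `KMeans` stands in for whatever clustering routine the authors use.

```python
# Hedged sketch of k-means weight quantization for contribution (1).
# All names here are illustrative assumptions, not the paper's API.
import numpy as np
from sklearn.cluster import KMeans

def quantize_weights(weights: np.ndarray, k: int):
    """Cluster the flattened weights into k centroids and return
    (centroids, indices): the compressed representation a node would
    transmit instead of the raw float32 model."""
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(flat)
    centroids = km.cluster_centers_.ravel()     # k float32 codebook values
    indices = km.labels_.astype(np.uint16)      # ~ceil(log2(k)) bits each
    return centroids, indices

def dequantize_weights(centroids, indices, shape):
    """Receiver side: rebuild an approximate weight tensor."""
    return centroids[indices].reshape(shape)

# Toy usage: quantize one small layer and check the reconstruction error.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32)).astype(np.float32)
cents, idx = quantize_weights(w, k=16)
w_hat = dequantize_weights(cents, idx, w.shape)
print("max abs error:", np.abs(w - w_hat).max())

# Rough payload comparison: n weights at 32 bits each versus
# n*ceil(log2(k)) index bits plus a 32-bit codebook of k centroids.
n, k = 100_000, 512
orig_bits = 32 * n
quant_bits = int(np.ceil(np.log2(k))) * n + 32 * k
print(f"compression ratio ~ {orig_bits / quant_bits:.1f}x")
```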
2. Related Work
2.1. Convergence Framework for DAG Blockchain and Federated Learning
2.2. Communication Overhead Issues
3. System Model
3.1. System Framework Architecture
3.2. System Workflow
- Initial Transaction: The initial transaction, which contains the FL tasks and the initial model parameters.
- Transaction: The primary transaction type, which contains the published model.
- Tip Transaction: A transaction that contains the latest model and has not yet been approved by any other transaction.
- New Transaction: A newly created transaction that contains the model to be published (an illustrative record layout follows this list).
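Since the workflow above only names the transaction types, here is one way such a record could be represented. The field names, the `kind` values, and the single dataclass are assumptions for exposition, not the paper's wire format:

```python
# Illustrative DAG-ledger transaction record for the workflow above.
from dataclasses import dataclass, field

@dataclass
class Transaction:
    tx_id: str
    kind: str                    # "initial" | "transaction" | "tip" | "new"
    model_payload: bytes         # compressed (quantized) model parameters
    approves: list = field(default_factory=list)     # IDs of tips this tx approves
    approved_by: list = field(default_factory=list)  # empty list => still a tip
```

Under this layout, a tip transaction is simply any record whose `approved_by` list is still empty, which matches the definition in the list above.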
3.3. Problem Formulation
4. Optimization Methods
4.1. Adaptive Quantization Model Compression Based on k-Means
4.1.1. Selection of Clustering Centers and Number of Clusters k
4.1.2. Calculation of Quantized Partial Model Bits
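Only the heading survives in this extraction, but the bit count it refers to presumably follows the standard codebook accounting for k-means-quantized weights (in the spirit of the Deep Compression reference below). Assuming $n$ quantized weights, $k$ clusters, and 32-bit floats, a plausible form is:

$$
B_{\text{quant}} = n \lceil \log_2 k \rceil + 32k
\qquad \text{vs.} \qquad
B_{\text{orig}} = 32n,
$$

so for large $n$ the compression ratio approaches $32 / \lceil \log_2 k \rceil$.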
4.2. Grading-Based Tips Selection Strategy
Algorithm 1: Grading-based tips selection strategy
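Algorithm 1 itself is not reproduced in this extraction; the following is a minimal sketch of the grading idea stated in contribution (2), scoring each tip by model accuracy offset by communication cost. The weights `w_acc` and `w_cost` and all function names are assumptions, not the paper's exact scoring rule:

```python
# Hedged sketch of a grading-based tips selection strategy.
from dataclasses import dataclass

@dataclass
class Tip:
    model_id: str
    accuracy: float    # validation accuracy of the model carried by this tip
    comm_cost: float   # communication overhead attributed to this tip

def grade(tip: Tip, w_acc: float = 0.7, w_cost: float = 0.3) -> float:
    # Higher accuracy raises the grade; higher overhead lowers it.
    return w_acc * tip.accuracy - w_cost * tip.comm_cost

def select_tips(tips: list, m: int = 3) -> list:
    """Pick the m highest-graded tips so that new transactions approve
    (and aggregate) models that push accuracy up at low cost."""
    return sorted(tips, key=grade, reverse=True)[:m]
```

With `m = 3` this matches the "tips per round" value in the hyperparameter table in Section 5; selecting high-grade tips is what steers the FL model toward higher accuracy, as contribution (2) describes.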
5. Simulation Analysis
5.1. Experiment Setting
5.1.1. Datasets and Models
5.1.2. Training Hyperparameters Setting
5.2. Results Analysis
6. Conclusions
7. Discussion
7.1. Selection of the Optimal k Value
7.2. Tip Selection Algorithm with Multi-Factor Evaluation
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Cui, L.; Yang, S.; Chen, Z.; Pan, Y.; Xu, M.; Xu, K. An Efficient and Compacted DAG-Based Blockchain Protocol for Industrial Internet of Things. IEEE Trans. Ind. Inform. 2020, 16, 4134–4145.
- Custers, B.; Sears, A.M.; Dechesne, F.; Georgieva, I.; Tani, T.; Van der Hof, S. EU Personal Data Protection in Policy and Practice; T.M.C. Asser Press: The Hague, The Netherlands, 2019; Volume 29.
- Gaff, B.M.; Sussman, H.E.; Geetter, J. Privacy and Big Data. Computer 2014, 47, 7–9.
- Qi, Y.; Hossain, M.S.; Nie, J.; Li, X. Privacy-Preserving Blockchain-Based Federated Learning for Traffic Flow Prediction. Future Gener. Comput. Syst. 2021, 117, 328–337.
- Lu, Y.; Huang, X.; Dai, Y.; Maharjan, S.; Zhang, Y. Blockchain and Federated Learning for Privacy-Preserved Data Sharing in Industrial IoT. IEEE Trans. Ind. Inform. 2020, 16, 4177–4186.
- Li, Y.; Cao, B.; Peng, M.; Zhang, L.; Zhang, L.; Feng, D.; Yu, J. Direct Acyclic Graph-Based Ledger for Internet of Things: Performance and Security Analysis. IEEE/ACM Trans. Netw. 2020, 28, 1643–1656.
- Beilharz, J.; Pfitzner, B.; Schmid, R.; Geppert, P.; Arnrich, B.; Polze, A. Implicit Model Specialization through DAG-Based Decentralized Federated Learning. In Proceedings of Middleware '21: 22nd International Middleware Conference, Québec City, QC, Canada, 6–10 December 2021; pp. 310–322.
- Schmid, R.; Pfitzner, B.; Beilharz, J.; Arnrich, B.; Polze, A. Tangle Ledger for Decentralized Learning. In Proceedings of the 2020 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), New Orleans, LA, USA, 18–22 May 2020; pp. 852–859.
- Cao, M.; Zhang, L.; Cao, B. Toward On-Device Federated Learning: A Direct Acyclic Graph-Based Blockchain Approach. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 2028–2042.
- Zhu, L.; Liu, Z.; Han, S. Deep Leakage from Gradients. arXiv 2019, arXiv:1906.08935.
- Zhao, B.; Mopuri, K.R.; Bilen, H. iDLG: Improved Deep Leakage from Gradients. arXiv 2020, arXiv:2001.02610.
- Geiping, J.; Bauermeister, H.; Dröge, H.; Moeller, M. Inverting Gradients—How Easy Is It to Break Privacy in Federated Learning? arXiv 2020, arXiv:2003.14053.
- Yang, W.; Yang, Y.; Dang, X.; Jiang, H.; Zhang, Y.; Xiang, W. A Novel Adaptive Gradient Compression Approach for Communication-Efficient Federated Learning. In Proceedings of the 2021 China Automation Congress (CAC), Beijing, China, 22–24 October 2021; pp. 674–678.
- Luo, P.; Yu, F.R.; Chen, J.; Li, J.; Leung, V.C.M. A Novel Adaptive Gradient Compression Scheme: Reducing the Communication Overhead for Distributed Deep Learning in the Internet of Things. IEEE Internet Things J. 2021, 8, 11476–11486.
- Xue, X.; Mao, H.; Li, Q.; Huang, F.; Abd El-Latif, A.A. An Energy Efficient Specializing DAG Federated Learning Based on Event-Triggered Communication. Mathematics 2022, 10, 4388.
- Chen, M.; Yang, Z.; Saad, W.; Yin, C.; Poor, H.V.; Cui, S. A Joint Learning and Communications Framework for Federated Learning Over Wireless Networks. IEEE Trans. Wirel. Commun. 2021, 20, 269–283.
- Han, S.; Mao, H.; Dally, W.J. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. arXiv 2015, arXiv:1510.00149.
| | Optimization of Training Efficiency | Optimization of Model Accuracy | Scalability | Model Compression | Optimization of Tips Selection |
|---|---|---|---|---|---|
| Literature [6] | ✔ | ✔ | ☓ | ☓ | ☓ |
| Literature [11] | ☓ | ☓ | ✔ | ☓ | ☓ |
| Literature [12] | ✔ | ✔ | ☓ | ✔ | ✔ |
| CDAG-FL | ✔ | ✔ | ✔ | ✔ | ✔ |
| Parameter | Value |
|---|---|
| Local epochs | 3 |
| Local batches | 32 |
| Batch size | 32 |
| Learning rate | 0.001 |
| Clients per round | 10 |
| Tips selected per round | 3 |
| | 60 |
| | 35 |
| | 5 |
| Fixed k value | 514 |
| | 1024 |
| | 4 |
| Method | Acc (%) | Acc ↑ (%) | Transmission Bits (MB) | Transmission Bits ↓ (%) |
|---|---|---|---|---|
| DAG-FL (baseline) | 88.32 | 0 | 627.29 | 0 |
| Fixed k value | 89.19 | 0.98 | 527.44 | 15.91 |
| Ours | 88.82 | 0.56 | 508.86 | 16.07 |