FedNow: An Efficiency-Aware Incentive Mechanism Enables Privacy Protection and Efficient Resource Utilization in Federated Edge Learning
Abstract
1. Introduction
- We propose an efficiency-aware incentive mechanism, FedNow, which allows strategic edge servers to autonomously decide their local training rounds, improving resource utilization while compensating them for their training costs. Unlike existing works that focus only on improving model performance or incentivizing client engagement, we jointly optimize training efficiency and expenditure costs under a single objective. Moreover, we account for the multi-dimensional heterogeneous attributes, strategic decision-making, and incomplete information of edge servers, which makes our work more practical.
- We model the optimization problem of FedNow as a mixed-integer nonlinear program, which is nonconvex due to the mixed-integer variables and multiple incentive constraints. To solve this complex problem, we use contract theory to determine the optimal payment for each type of edge server and design a scoring function that prioritizes the selection of more efficient edge servers, i.e., those able to provide more data at lower cost. We then theoretically derive sufficient conditions under which FedNow strictly outperforms existing schemes with respect to the joint optimization objective. Building on these results and integrating the Monte Carlo method, FedNow obtains near-optimal feasible solutions and model parameters within a finite number of training rounds.
- We perform extensive experimental evaluations on one synthetic and three real datasets to demonstrate the advantages of FedNow in resource utilization and training efficiency. The numerical results indicate that FedNow converges at least 42.3% faster than the baselines at the same model accuracy, and improves training efficiency and resource utilization by at least 15.53% and 22.78%, respectively.
2. Related Work
3. System Model
3.1. Overview
3.2. Incentive Model
3.3. Design Objectives
- Computational efficiency (CE). The contract mechanism can be computed in polynomial time [33].
4. Optimal Design of Efficiency-Aware Incentive Mechanism
4.1. Optimal Rewards
- (i) Individual rationality (IR): each incentivized edge server obtains non-negative utility from its own contract item, which reduces to requiring the payment of the worst-off type to cover its training cost;
- (ii) Monotonicity: writing $\theta_j$, $s_j$, and $p_j$ for the unit training cost, required data size, and payment of type-$j$ edge servers, both $s_j$ and $p_j$ are non-increasing in $\theta_j$; that is, $\theta_j > \theta_m$ implies $s_j \le s_m$ and $p_j \le p_m$;
- (iii) Local incentive compatibility (IC): for any two neighboring contract items, $\theta_j (s_j - s_{j-1}) \le p_j - p_{j-1} \le \theta_{j-1} (s_j - s_{j-1})$.
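To make the three conditions concrete, the following is a minimal feasibility check written against them. It is a sketch assuming the linear per-item utility $u_j = p_j - \theta_j s_j$ and types sorted by ascending unit cost; all names and the utility form are illustrative assumptions, not the paper's exact formulation.

```python
from typing import List, Tuple

def is_feasible_contract(items: List[Tuple[float, float]], thetas: List[float]) -> bool:
    """Check Theorem-1-style feasibility conditions for a contract.

    items:  (s_j, p_j) pairs, one per type, sorted by ascending unit
            training cost theta_j (so thetas is ascending).
    Assumes the linear per-item utility u_j = p_j - theta_j * s_j.
    """
    J = len(items)
    assert J == len(thetas) and all(thetas[k] <= thetas[k + 1] for k in range(J - 1))

    s = [it[0] for it in items]
    p = [it[1] for it in items]

    # (i) IR of the worst-off (highest-cost) type: non-negative utility.
    if p[-1] - thetas[-1] * s[-1] < 0:
        return False

    # (ii) Monotonicity: s_j and p_j non-increasing in the unit cost.
    for j in range(J - 1):
        if s[j + 1] > s[j] or p[j + 1] > p[j]:
            return False

    # (iii) Local IC between neighboring items:
    #   theta_j (s_j - s_{j-1}) <= p_j - p_{j-1} <= theta_{j-1} (s_j - s_{j-1}).
    for j in range(1, J):
        ds, dp = s[j] - s[j - 1], p[j] - p[j - 1]
        if not (thetas[j] * ds <= dp <= thetas[j - 1] * ds):
            return False
    return True
```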
4.2. Efficiency Score Function Design
- (i) Under mild conditions on the score function and the platform's cost upper bound, there exists a threshold type in the score queue such that incentivizing all types ranked ahead of it remains within the platform's cost upper bound;
- (ii) if the threshold type exists, there exist an optimal type and a required data size that minimize the platform's total cost, and the platform's optimal incentivized type set consists of all types ranked no later than this optimal type in the score queue.
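As an illustration of how such a score function can drive selection, the sketch below ranks types by data provided per unit payment and admits them until the platform's cost upper bound binds, which is where the threshold type appears. The score definition and all identifiers are assumptions, not the paper's exact design.

```python
def select_incentivized_types(data_sizes, payments, cost_upper_bound):
    """Greedy sketch of score-based type selection.

    data_sizes[j], payments[j]: optimal contract item for type j (Theorem 1).
    Types providing more data at lower cost score higher; types are admitted
    in score order until the platform's cost upper bound would be exceeded.
    """
    # Illustrative score: data contributed per unit of payment.
    scores = [(d / p, j) for j, (d, p) in enumerate(zip(data_sizes, payments))]
    scores.sort(reverse=True)

    selected, total_cost = [], 0.0
    for _, j in scores:
        if total_cost + payments[j] > cost_upper_bound:
            break  # threshold type reached: admitting more types is infeasible
        selected.append(j)
        total_cost += payments[j]
    return selected, total_cost
```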
Algorithm 1. Calculation of the feasible solution.
Algorithm 2. The training process of FedNow.
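A high-level sketch of the training process that Algorithm 2 describes is given below, assuming a FedAvg-style aggregation weighted by required data size. The helper methods (`choose_local_rounds`, `local_train`) and the per-round flow are our assumptions rather than the paper's exact pseudocode.

```python
import numpy as np

def fednow_train(global_model, edge_servers, contract, scores, T, time_budget):
    """Hedged sketch of FedNow's training loop (cf. Algorithm 2).

    edge_servers: objects exposing choose_local_rounds() and local_train();
    contract:     type -> (required data size, payment) from Theorem 1;
    scores:       efficiency scores used to pick the incentivized set.
    """
    for t in range(T):
        # Incentivize the highest-scoring edge servers (Section 4.2).
        ranked = sorted(edge_servers, key=lambda e: scores[e.type], reverse=True)
        selected = [e for e in ranked if e.type in contract]

        updates, weights = [], []
        for server in selected:
            s_j, p_j = contract[server.type]
            # Each server strategically chooses its local rounds r_j,
            # subject to the longest allowable training time.
            r_j = server.choose_local_rounds(time_budget)
            updates.append(server.local_train(global_model, r_j, s_j))
            weights.append(s_j)

        # Data-size-weighted aggregation (FedAvg-style assumption).
        w = np.asarray(weights, dtype=float) / sum(weights)
        global_model = sum(wi * ui for wi, ui in zip(w, updates))
    return global_model
```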
5. Experiments
5.1. Experimental Setup
- UB: a vanilla contract incentive mechanism that assumes each edge server conducts only one round of local training (Theorem 1). It therefore represents the upper bound on the cost of any feasible strategy.
5.2. Evaluation of the Synthetic Dataset
5.3. Evaluation Using the Real Datasets
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
The feasibility conditions of Theorem 1 are restated here: (i) the IR condition on the worst-off type; (ii) the monotonicity of the required data size $s_j$ and the payment $p_j$ in the unit training cost $\theta_j$; and (iii) the local IC bounds between neighboring contract items. We prove their necessity and sufficiency in turn.
- (i) Necessity:
- (a) Condition (i): according to the IR constraint, the payment of each incentivized type must yield non-negative utility; applying this constraint to the worst-off type directly gives Condition (i). So, Condition (i) holds.
- (b) Condition (ii): according to the IC constraints, type-$j$ edge servers must prefer their own contract item over that of any type $m$ (Formula (A3)), and, similarly, type-$m$ edge servers must prefer their own item over that of type $j$ (Formula (A4)). Summing the two inequalities shows that $\theta_j > \theta_m$ implies $s_j \le s_m$; substituting back into Formula (A4) then gives $p_j \le p_m$. In other words, both $s_j$ and $p_j$ are negatively correlated with $\theta_j$ (the summation step is sketched in the derivation after this proof). So, Condition (ii) holds.
- (c) Condition (iii): applying the IC constraints to any two neighboring contract items bounds the payment difference $p_j - p_{j-1}$ between $\theta_j(s_j - s_{j-1})$ and $\theta_{j-1}(s_j - s_{j-1})$. So, Condition (iii) holds.
- (ii) Sufficiency:
- (a) For the new type, the IR and IC constraints are satisfied by construction.
- (b) For the existing types, the IR and IC constraints remain satisfied when the new type exists.
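For completeness, the summation step behind Condition (ii) in the necessity proof is sketched below in assumed notation, with $u_j = p_j - \theta_j s_j$ standing in for the per-item utility and (A3)/(A4) denoting the two IC inequalities:

```latex
% Assumed linear per-item utility: u_j = p_j - \theta_j s_j.
\begin{align*}
  p_j - \theta_j s_j &\ge p_m - \theta_j s_m, && \text{IC of type } j \text{ (cf. (A3))},\\
  p_m - \theta_m s_m &\ge p_j - \theta_m s_j, && \text{IC of type } m \text{ (cf. (A4))}.
\end{align*}
% Summing the two inequalities and cancelling p_j + p_m yields
\begin{equation*}
  (\theta_j - \theta_m)(s_m - s_j) \ge 0,
\end{equation*}
% so \theta_j > \theta_m implies s_j \le s_m; substituting back into (A4) gives
% p_m - p_j \ge \theta_m (s_m - s_j) \ge 0, i.e., both s and p decrease as the
% unit training cost increases.
```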
Appendix B
Appendix C
Appendix D
- (i) For a given required data size, according to Theorem 1, a solution is feasible when the platform's resulting total cost is lower than its cost upper bound.
- (ii) Since the cloud server's cost optimization problem involves two variables, if a threshold type exists under the current data requirement, the cloud server can always find a solution that is no worse than the current feasible one by adjusting its data requirement. Therefore, under a feasible data requirement, the cloud server's optimal incentivized type set consists of the types admitted before the threshold, which ensures that the cost upper bound holds. Hence, the validity of (ii) is proven.
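This argument suggests a simple search procedure: sample candidate data requirements, compute the contract and threshold under each, and keep the cheapest feasible solution. The sketch below assumes a helper `optimal_payments(s)` returning the Theorem-1 payments for every type under data requirement `s`; all names, and the payment-ordered admission rule standing in for the score queue, are illustrative.

```python
def search_data_requirement(candidate_sizes, optimal_payments, cost_upper_bound):
    """Grid/Monte Carlo sketch of the Appendix D search over data requirements.

    candidate_sizes:     iterable of candidate required data sizes s;
    optimal_payments(s): per-type optimal payments under s (Theorem 1).
    Returns the cheapest feasible (total_cost, s, incentivized types) found.
    """
    best = None
    for s in candidate_sizes:
        payments = optimal_payments(s)
        # Admit cheaper types first (a stand-in for the score-queue order).
        order = sorted(range(len(payments)), key=lambda j: payments[j])
        selected, cost = [], 0.0
        for j in order:
            if cost + payments[j] > cost_upper_bound:
                break  # threshold type reached under this data requirement
            selected.append(j)
            cost += payments[j]
        if selected and (best is None or cost < best[0]):
            best = (cost, s, selected)
    return best
```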
References
- Pantelopoulos, A.; Bourbakis, N.G. A survey on wearable sensor-based systems for health monitoring and prognosis. IEEE Trans. Syst. Man, Cybern. Part C Appl. Rev. 2009, 40, 1–12. [Google Scholar] [CrossRef]
- Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J.L. A public domain dataset for human activity recognition using smartphones. In Proceedings of the Esann, Bruges, Belgium, 24–26 April 2013; Volume 3, p. 3. [Google Scholar]
- Zhu, G.; Liu, D.; Du, Y.; You, C.; Zhang, J.; Huang, K. Toward an intelligent edge: Wireless communication meets machine learning. IEEE Commun. Mag. 2020, 58, 19–25. [Google Scholar] [CrossRef]
- Ma, Z.; Xu, Y.; Xu, H.; Meng, Z.; Huang, L.; Xue, Y. Adaptive batch size for federated learning in resource-constrained edge computing. IEEE Trans. Mob. Comput. 2023, 22, 37–53. [Google Scholar] [CrossRef]
- Lu, Y.; Huang, X.; Dai, Y.; Maharjan, S.; Zhang, Y. Federated learning for data privacy preservation in vehicular cyber-physical systems. IEEE Network 2020, 34, 50–56. [Google Scholar] [CrossRef]
- Xiao, Y.; Zhang, X.; Li, Y.; Shi, G.; Krunz, M.; Nguyen, D.N.; Hoang, D.T. Time-sensitive learning for heterogeneous federated edge intelligence. IEEE Trans. Mob. Comput. 2023, early access. [Google Scholar] [CrossRef]
- Luo, S.; Chen, X.; Wu, Q.; Zhou, Z.; Yu, S. HFEL: Joint edge association and resource allocation for cost-efficient hierarchical federated edge learning. IEEE Trans. Wirel. Commun. 2020, 19, 6535–6548. [Google Scholar] [CrossRef]
- Lyu, L.; Chen, C. A novel attribute reconstruction attack in federated learning. arXiv 2021, arXiv:2108.06910. [Google Scholar]
- Varshney, P.; Simmhan, Y. Characterizing application scheduling on edge, fog, and cloud computing resources. Softw. Pract. Exp. 2020, 50, 558–595. [Google Scholar] [CrossRef]
- Stich, S.U. Local SGD converges fast and communicates little. arXiv 2018, arXiv:1805.09767. [Google Scholar]
- Liu, B.; Shen, W.; Li, P.; Zhu, X. Accelerate mini-batch machine learning training with dynamic batch size fitting. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
- Ho, Q.; Cipar, J.; Cui, H.; Lee, S.; Kim, J.K.; Gibbons, P.B.; Gibson, G.A.; Ganger, G.; Xing, E.P. More effective distributed ML via a stale synchronous parallel parameter server. Adv. Neural Inf. Process. Syst. 2013, 26, 1223–1231. [Google Scholar]
- Zhang, J.; Tu, H.; Ren, Y.; Wan, J.; Zhou, L.; Li, M.; Wang, J. An adaptive synchronous parallel strategy for distributed machine learning. IEEE Access 2018, 6, 19222–19230. [Google Scholar] [CrossRef]
- Chen, C.; Wang, W.; Li, B. Round-robin synchronization: Mitigating communication bottlenecks in parameter servers. In Proceedings of the IEEE INFOCOM 2019—IEEE Conference on Computer Communications, Paris, France, 29 April–2 May 2019; pp. 532–540. [Google Scholar]
- Wang, S.; Tuor, T.; Salonidis, T.; Leung, K.K.; Makaya, C.; He, T.; Chan, K. Adaptive federated learning in resource constrained edge computing systems. IEEE J. Sel. Areas Commun. 2019, 37, 1205–1221. [Google Scholar] [CrossRef]
- Tran, N.H.; Bao, W.; Zomaya, A.; Nguyen, M.N.; Hong, C.S. Federated learning over wireless networks: Optimization model design and analysis. In Proceedings of the IEEE INFOCOM 2019—IEEE Conference on Computer Communications, Paris, France, 29 April–2 May 2019; pp. 1387–1395. [Google Scholar]
- Zeng, R.; Zhang, S.; Wang, J.; Chu, X. FMore: An incentive scheme of multi-dimensional auction for federated learning in MEC. In Proceedings of the 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS), Singapore, 29 November–1 December 2020; pp. 278–288. [Google Scholar]
- Kang, J.; Xiong, Z.; Niyato, D.; Xie, S.; Zhang, J. Incentive mechanism for reliable federated learning: A joint optimization approach to combining reputation and contract theory. IEEE Internet Things J. 2019, 6, 10700–10714. [Google Scholar] [CrossRef]
- Tang, M.; Wong, V.W. An incentive mechanism for cross-silo federated learning: A public goods perspective. In Proceedings of the IEEE INFOCOM 2021—IEEE Conference on Computer Communications, Vancouver, BC, Canada, 10–13 May 2021; pp. 1–10. [Google Scholar]
- Wang, Y.; Su, Z.; Luan, T.H.; Li, R.; Zhang, K. Federated learning with fair incentives and robust aggregation for UAV-aided crowdsensing. IEEE Trans. Netw. Sci. Eng. 2021, 9, 3179–3196. [Google Scholar] [CrossRef]
- Xiao, Y.; Shi, G.; Li, Y.; Saad, W.; Poor, H.V. Toward self-learning edge intelligence in 6G. IEEE Commun. Mag. 2020, 58, 34–40. [Google Scholar] [CrossRef]
- Cipar, J.; Ho, Q.; Kim, J.K.; Lee, S.; Ganger, G.R.; Gibson, G.; Keeton, K.; Xing, E. Solving the straggler problem with bounded staleness. In Proceedings of the 14th Workshop on Hot Topics in Operating Systems (HotOS XIV), Santa Ana Pueblo, NM, USA, 13–15 May 2013. [Google Scholar]
- Tyagi, S.; Sharma, P. Taming resource heterogeneity in distributed ML training with dynamic batching. In Proceedings of the 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS), Washington, DC, USA, 17–21 August 2020; pp. 188–194. [Google Scholar]
- Ye, Q.; Zhou, Y.; Shi, M.; Sun, Y.; Lv, J. DBS: Dynamic batch size for distributed deep neural network training. arXiv 2020, arXiv:2007.11831. [Google Scholar]
- Xia, W.; Quek, T.Q.; Guo, K.; Wen, W.; Yang, H.H.; Zhu, H. Multi-armed bandit-based client scheduling for federated learning. IEEE Trans. Wirel. Commun. 2020, 19, 7108–7123. [Google Scholar] [CrossRef]
- Shi, W.; Zhou, S.; Niu, Z. Device scheduling with fast convergence for wireless federated learning. In Proceedings of the ICC 2020—2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020; pp. 1–6. [Google Scholar]
- Chen, M.; Poor, H.V.; Saad, W.; Cui, S. Convergence time minimization of federated learning over wireless networks. In Proceedings of the ICC 2020—2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020; pp. 1–6. [Google Scholar]
- Feng, S.; Niyato, D.; Wang, P.; Kim, D.I.; Liang, Y.-C. Joint service pricing and cooperative relay communication for federated learning. In Proceedings of the 2019 International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), Atlanta, GA, USA, 14–17 July 2019; pp. 815–820. [Google Scholar]
- Sarikaya, Y.; Ercetin, O. Motivating workers in federated learning: A stackelberg game perspective. IEEE Netw. Lett. 2019, 2, 23–27. [Google Scholar] [CrossRef]
- Ding, N.; Fang, Z.; Huang, J. Optimal contract design for efficient federated learning with multi-dimensional private information. IEEE J. Sel. Areas Commun. 2020, 39, 186–200. [Google Scholar] [CrossRef]
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS) 2017, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
- Carli, R.; Chiuso, A.; Schenato, L.; Zampieri, S. A pi consensus controller for networked clocks synchronization. IFAC Proc. Vol. 2008, 41, 10289–10294. [Google Scholar] [CrossRef]
- Wang, D.; Ren, J.; Wang, Z.; Wang, Y.; Zhang, Y. PRIVAIM: A dual-privacy preserving and quality-aware incentive mechanism for federated learning. IEEE Trans. Comput. 2022, 72, 1913–1927. [Google Scholar] [CrossRef]
- Ding, N.; Gao, L.; Huang, J. Joint participation incentive and network pricing design for federated learning. In Proceedings of the IEEE INFOCOM 2023—IEEE Conference on Computer Communications, New York City, NY, USA, 17–20 May 2023; pp. 1–10. [Google Scholar]
- Li, M.; Zhang, T.; Chen, Y.; Smola, A.J. Efficient mini-batch training for stochastic optimization. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 24–27 August 2014; pp. 661–670. [Google Scholar]
- Dekel, O.; Gilad-Bachrach, R.; Shamir, O.; Xiao, L. Optimal distributed online prediction using mini-batches. J. Mach. Learn. Res. 2012, 13, 165–202. [Google Scholar]
- Wang, X.; Zhao, Y.; Qiu, C.; Liu, Z.; Nie, J.; Leung, V.C. InFEDge: A blockchain-based incentive mechanism in hierarchical federated learning for end-edge-cloud communications. IEEE J. Sel. Areas Commun. 2022, 40, 3325–3342. [Google Scholar] [CrossRef]
- Lu, J.; Liu, H.; Jia, R.; Zhang, Z.; Wang, X.; Wang, J. Incentivizing proportional fairness for multi-task allocation in crowdsensing. IEEE Trans. Serv. Comput. 2023. [Google Scholar] [CrossRef]
- Lu, J.; Liu, H.; Jia, R.; Wang, J.; Sun, L.; Wan, S. Towards personalized federated learning via group collaboration in IIoT. IEEE Trans. Ind. Inform. 2023, 19, 8923–8932. [Google Scholar] [CrossRef]
- Lu, J.; Liu, H.; Zhang, Z.; Wang, J.; Goudos, S.K.; Wan, S. Toward fairness-aware time-sensitive asynchronous federated learning for critical energy infrastructure. IEEE Trans. Ind. Inform. 2022, 18, 3462–3472. [Google Scholar] [CrossRef]
- Ying, C.; Jin, H.; Wang, X.; Luo, Y. Double insurance: Incentivized federated learning with differential privacy in mobile crowdsensing. In Proceedings of the 2020 International Symposium on Reliable Distributed Systems (SRDS), Shanghai, China, 21–24 September 2020; pp. 81–90. [Google Scholar]
- Huang, T.; Lin, W.; Wu, W.; He, L.; Li, K.; Zomaya, A.Y. An efficiency-boosting client selection scheme for federated learning with fairness guarantee. IEEE Trans. Parallel Distrib. Syst. 2020, 32, 1552–1564. [Google Scholar] [CrossRef]
- Ma, C.; Konečnỳ, J.; Jaggi, M.; Smith, V.; Jordan, M.I.; Richtárik, P.; Takáč, M. Distributed optimization with arbitrary local solvers. Optim. Methods Softw. 2017, 32, 813–848. [Google Scholar] [CrossRef]
- Sultana, A.; Haque, M.M.; Chen, L.; Xu, F.; Yuan, X. Eiffel: Efficient and fair scheduling in adaptive federated learning. IEEE Trans. Parallel Distrib. Syst. 2022, 33, 4282–4294. [Google Scholar] [CrossRef]
Variable | Physical Meanings
---|---
 | The number set, type set, and incentivized type set of edge servers.
 | The set of contract items; the contract item for type-j edge servers.
 | The unit training cost and local training rounds of type-j edge servers.
 | The required data size and corresponding payment for type-j edge servers.
 | The upper limit of local training rounds; the longest allowable training time.
 | The marginal efficiency costs of type-j edge servers; the approximate marginal efficiency costs.
 | The conditional probability that edge servers with a given expected training strategy choose real training strategy r.
 | The serial number of the type-j edge servers in the score queue.
Ref. | LR | D | acc.l | P | |
---|---|---|---|---|---|---
UB | {1, ⋯, 10} | 1 | 10.3583 | 0.3107 | 9.2625 | 0.1152
Control Group | {1, 2, 3} | | 2.4602 | 0.6375 | 6.5252 | 0.1231
CD | {1, 2, 3, 4, 5} | | 7.5857 | 0.363 | 7.5447 | 0.1048
MEC | {1, 3, 5, 10} | | 10.4166 | 0.3098 | 7.0024 | 0.0946
FedNow | {1, 2, 3, 10} | | 9.0836 | 0.3317 | 7.4569 | 0.1009
Scheme | MNIST-MLP Efficiency | MNIST-MLP Test_acc | FMNIST-CNN Efficiency | FMNIST-CNN Test_acc | CIFAR10-RESNET Efficiency | CIFAR10-RESNET Test_acc
---|---|---|---|---|---|---
UB | 0.4109 | 0.9546 | 0.4132 | 0.9800 | 0.4787 | 0.7874
CD | 0.6672 | 0.9667 | 0.6780 | 0.9822 | 0.7010 | 0.8435
FedNow | 0.8247 | 0.9833 | 0.8027 | 0.9838 | 0.8357 | 0.8332
Mathematical Terms | Physical Meanings
---|---
 | The prediction loss on a pair of data samples, the loss function of edge server j, the model parameters, and the optimal model parameters.
 | The total costs of edge server i in the t-th global epoch; the unit computation cost, unit data-collection cost, and unit communication cost of edge server i.
 | The global model size (fixed), and the number of type-j edge servers.
 | The computation time for a type-j edge server to perform one local training round, the number of CPU cycles required to process one data sample, and the CPU-cycle frequency of a type-j edge server.
 | The total utility of type-j edge servers; the unit-round utility of type-j edge servers, i.e., the utility of completing the contract per unit time.
 | The platform's weight on the rewards, and the optimal data size/rewards for type-j edge servers.
 | The utility function of the platform, equivalent to the platform's total costs; the cost upper bound of the platform.
 | The global sensitivity of the score function.
 | The real training rounds of type-j edge servers, the expected training rounds of type-j edge servers, and the set of possible training rounds.
 | The total costs of the platform under the two candidate incentivized type sets, respectively.
 | The general upper bound on the number of global iterations, expressed via the global accuracy and the local accuracy.
 | The waiting time in the t-th epoch and the overall average waiting time.
 | The epoch time in the t-th epoch, the overall average epoch time, and the training time of the slowest edge server in the t-th epoch.