Article

Computation and Communication Efficient Adaptive Federated Optimization of Federated Learning for Internet of Things

1 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
2 School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(16), 3451; https://doi.org/10.3390/electronics12163451
Submission received: 21 July 2023 / Revised: 11 August 2023 / Accepted: 13 August 2023 / Published: 15 August 2023

Abstract

The proliferation of the Internet of Things (IoT) and the widespread use of devices with sensing, computing, and communication capabilities have motivated intelligent applications empowered by artificial intelligence. Classical artificial intelligence algorithms require centralized data collection and processing, which is challenging in realistic intelligent IoT applications due to growing data privacy concerns and distributed datasets. Federated Learning (FL) has emerged as a privacy-preserving distributed learning framework that enables IoT devices to train a global model by sharing model parameters rather than raw data. However, the inefficiency caused by frequent parameter transmissions significantly degrades FL performance. Existing acceleration algorithms fall into two main types, local update and parameter compression, which address the trade-off between communication and computation and between communication and precision, respectively. Jointly considering these two trade-offs and adaptively balancing their impacts on convergence remains an open problem. To address it, this paper proposes a novel efficient adaptive federated optimization (FedEAFO) algorithm that improves the efficiency of FL by minimizing the learning error with respect to two jointly considered variables: the local update frequency and the parameter compression ratio. FedEAFO enables FL to adaptively adjust these two variables and balance the trade-offs among computation, communication, and precision. The experimental results illustrate that, compared with state-of-the-art algorithms, FedEAFO achieves higher accuracy faster.
Keywords: federated learning; distributed machine learning; communication efficiency; privacy protection
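The two knobs the abstract describes, the number of local SGD steps per round (computation vs. communication) and the fraction of update entries transmitted (communication vs. precision), can be illustrated with a minimal federated-averaging simulation. This is a generic sketch of the two trade-off variables, not the authors' FedEAFO algorithm; all function names, the least-squares objective, and the fixed settings (`num_local_steps=5`, `ratio=0.5`) are illustrative assumptions.

```python
import numpy as np

def local_update(w, data, lr=0.1, num_local_steps=5):
    """Run several local SGD steps on a least-squares objective.

    num_local_steps is the 'local update' variable: more steps mean
    more computation per round but fewer communication rounds.
    """
    X, y = data
    for _ in range(num_local_steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def top_k_compress(delta, ratio=0.5):
    """Keep only the largest-magnitude entries of a model update.

    ratio is the 'parameter compression' variable: smaller ratios cut
    communication cost but introduce precision error.
    """
    k = max(1, int(ratio * delta.size))
    idx = np.argsort(np.abs(delta))[-k:]
    sparse = np.zeros_like(delta)
    sparse[idx] = delta[idx]
    return sparse

# Synthetic linear-regression data split across three clients.
rng = np.random.default_rng(0)
w_true = rng.normal(size=4)
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 4))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(4)
for _ in range(20):  # communication rounds
    deltas = []
    for data in clients:
        w_local = local_update(w_global.copy(), data, num_local_steps=5)
        deltas.append(top_k_compress(w_local - w_global, ratio=0.5))
    w_global = w_global + np.mean(deltas, axis=0)  # FedAvg-style aggregation

print("distance to true model:", np.linalg.norm(w_global - w_true))
```

An adaptive scheme in the spirit of the abstract would tune `num_local_steps` and `ratio` per round instead of fixing them, trading local computation against communication volume as conditions change.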

Share and Cite

MDPI and ACS Style

Chen, Z.; Cui, H.; Wu, E.; Yu, X. Computation and Communication Efficient Adaptive Federated Optimization of Federated Learning for Internet of Things. Electronics 2023, 12, 3451. https://doi.org/10.3390/electronics12163451

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.