Article

Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems

1. Department of Computer Science, Lakehead University, Thunder Bay, ON P7B 5E1, Canada
2. Thunder Bay Regional Health Research Institute (TBRHRI), Thunder Bay, ON P7B 7A5, Canada
* Author to whom correspondence should be addressed.
Sensors 2023, 23(1), 6; https://doi.org/10.3390/s23010006
Submission received: 8 November 2022 / Revised: 11 December 2022 / Accepted: 16 December 2022 / Published: 20 December 2022
(This article belongs to the Special Issue IoT Sensors Development and Application for Environment & Safety)

Abstract

Deep learning-based Human Activity Recognition (HAR) systems have received considerable interest for health monitoring and activity tracking on wearable devices. Training accurate deep learning models typically requires large, representative datasets. Federated Learning (FL) was introduced as an inherently private distributed training paradigm that keeps users' data on their devices while still leveraging it to train models on large, distributed datasets. However, standard FL (FedAvg) cannot train heterogeneous model architectures. In this paper, we propose Federated Learning via Augmented Knowledge Distillation (FedAKD) for distributed training of heterogeneous models. FedAKD is evaluated on two HAR datasets: a waist-mounted tabular HAR dataset and a wrist-mounted time-series HAR dataset. FedAKD is more flexible than standard FL (FedAvg) because it enables collaboration among heterogeneous deep learning models with varying learning capacities. In the considered FL experiments, the communication overhead under FedAKD is 200× lower than that of FL methods that communicate model gradients or weights. Relative to other model-agnostic FL methods, results show that FedAKD improves client performance by up to 20 percent. Furthermore, FedAKD proves relatively more robust under statistically heterogeneous scenarios.
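The communication savings described above stem from knowledge distillation: clients exchange soft predictions on a shared dataset rather than full model gradients or weights, so heterogeneous architectures can learn from one another. A minimal sketch of the underlying temperature-scaled distillation loss is shown below; the function names and the NumPy formulation are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: higher T yields softer probabilities,
    # exposing more of the model's "dark knowledge" about non-target classes.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between the teacher's softened distribution and the
    # student's: the objective that lets models of different architectures
    # align by exchanging predictions (soft labels) instead of weights.
    p = softmax(teacher_logits, T)  # teacher's soft labels
    q = softmax(student_logits, T)  # student's soft predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

Since a vector of class probabilities is orders of magnitude smaller than a deep model's weight tensor, communicating soft labels of this form is what makes the large reduction in per-round communication cost plausible.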
Keywords: deep learning; federated learning; knowledge distillation; human activity recognition; Internet of Things; hyperparameter tuning
Share and Cite

MDPI and ACS Style

Gad, G.; Fadlullah, Z. Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems. Sensors 2023, 23, 6. https://doi.org/10.3390/s23010006

AMA Style

Gad G, Fadlullah Z. Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems. Sensors. 2023; 23(1):6. https://doi.org/10.3390/s23010006

Chicago/Turabian Style

Gad, Gad, and Zubair Fadlullah. 2023. "Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems" Sensors 23, no. 1: 6. https://doi.org/10.3390/s23010006

APA Style

Gad, G., & Fadlullah, Z. (2023). Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems. Sensors, 23(1), 6. https://doi.org/10.3390/s23010006

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.
