Article

A Reliable Aggregation Method Based on Threshold Additive Secret Sharing in Federated Learning with Quality of Service (QoS) Support

by Yu-Ting Ting, Po-Wen Chi * and Chien-Ting Kuo
Department of Computer Science and Information Engineering, National Taiwan Normal University, Taipei 116, Taiwan
* Author to whom correspondence should be addressed.
IEEE Member.
Appl. Sci. 2024, 14(19), 8959; https://doi.org/10.3390/app14198959
Submission received: 12 August 2024 / Revised: 27 September 2024 / Accepted: 2 October 2024 / Published: 4 October 2024
(This article belongs to the Special Issue Cryptography in Data Protection and Privacy-Enhancing Technologies)

Abstract

Federated learning is a decentralized privacy-preserving mechanism that allows multiple clients to collaborate without exchanging their datasets; instead, they jointly train a model by uploading their own gradients. However, recent research has shown that attackers can use clients’ gradients to reconstruct the original training data, compromising the security of federated learning. Consequently, a growing number of studies aim to protect gradients using different techniques, a common one being secret sharing. However, previous research has shown that when secret sharing is used to protect gradient privacy, the original gradient cannot be reconstructed if even one share is lost or one server is damaged, interrupting the federated learning process. In this paper, we propose an approach that uses additive secret sharing for federated learning gradient aggregation, making it difficult for attackers to access clients’ original gradients. Additionally, our method ensures that, with a certain probability, server damage or the loss of gradient shares does not disrupt the federated learning operation. We also introduce a membership-level system, allowing members of different levels to obtain models with correspondingly different accuracy.
Keywords: federated learning; secret sharing; secure multiparty computation
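To make the aggregation idea in the abstract concrete, the following is a minimal sketch of plain (non-threshold) additive secret sharing over a public modulus: each client splits its gradient into random shares, each server sums the shares it receives, and combining all servers' partial sums reveals only the aggregate. This illustrates the baseline technique and its all-or-nothing weakness (every share is required), which is exactly what the paper's threshold variant addresses; the modulus, function names, and scalar "gradients" here are illustrative assumptions, not the authors' protocol.

```python
import random

P = 2**31 - 1  # public modulus (illustrative; real use needs cryptographic randomness)

def share(value, n):
    """Split `value` into n additive shares that sum to value mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)  # last share makes the sum come out right
    return shares

def aggregate(all_client_shares):
    """Each server i sums the i-th shares it received from every client."""
    n_servers = len(all_client_shares[0])
    return [sum(cs[i] for cs in all_client_shares) % P for i in range(n_servers)]

def reconstruct(server_sums):
    """Combining ALL servers' partial sums yields the aggregated gradient.
    Losing any single partial sum makes the result unrecoverable, which
    motivates the threshold scheme proposed in the paper."""
    return sum(server_sums) % P

# Three clients with scalar "gradients" 5, 7, 9; two non-colluding servers.
gradients = [5, 7, 9]
all_client_shares = [share(g, 2) for g in gradients]
print(reconstruct(aggregate(all_client_shares)))  # 21
```

Each server sees only uniformly random shares and the sum of those shares, so neither server alone learns any client's gradient.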

