Article

An Efficient Greedy Hierarchical Federated Learning Training Method Based on Trusted Execution Environments

by Jiaxing Yan 1,2, Yan Li 1, Sifan Yin 1, Xin Kang 1, Jiachen Wang 1, Hao Zhang 1 and Bin Hu 1,*

1 School of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
2 Henan Key Laboratory of Network Cryptography Technology, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(17), 3548; https://doi.org/10.3390/electronics13173548
Submission received: 6 August 2024 / Revised: 31 August 2024 / Accepted: 2 September 2024 / Published: 6 September 2024
(This article belongs to the Special Issue Novel Methods Applied to Security and Privacy Problems)

Abstract

With the continuous development of artificial intelligence, effectively solving the problem of data islands while protecting user data privacy has become a top priority. Federated learning is an effective solution to the twin dilemmas of data islands and data privacy protection; however, it still suffers from several security problems. This study therefore simulates real-world data distributions inside a hardware-based trusted execution environment (TEE) using two partitioning methods: independently and identically distributed (IID) and non-IID. The base model is ResNet164, and a greedy hierarchical training strategy is innovatively introduced to train and aggregate the complex model layer by layer, ensuring that the training of each layer is optimized while privacy is preserved. The experimental results show that under an IID data distribution, the final accuracy of the greedy hierarchical model reaches 86.72%, close to the 89.60% accuracy of the unpruned model; under non-IID conditions, the model's performance decreases. Overall, the TEE-based hierarchical federated learning method shows reasonable practicality and effectiveness in resource-constrained environments. This study further verifies the advantages of the greedy hierarchical federated learning model in enhancing data privacy protection, optimizing resource utilization, and improving model training efficiency, providing new ideas and methods for solving the data island and data privacy protection problems.
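The abstract compares IID and non-IID client data distributions. As an illustration only (the paper's exact partitioning procedure is not given on this page), a common way to simulate both regimes is to shard sample indices uniformly for IID and to draw per-class client shares from a Dirichlet distribution for non-IID, where a smaller concentration parameter yields more skewed clients. The function below is a minimal stdlib-only sketch of that idea; `partition_labels`, its parameters, and the Dirichlet-via-Gamma construction are illustrative assumptions, not the authors' code.

```python
import random

def partition_labels(labels, num_clients, alpha=None, seed=0):
    """Split sample indices across simulated federated clients.

    alpha=None  -> IID: uniform shuffle, round-robin shards.
    alpha=float -> non-IID: per-class Dirichlet(alpha) shares;
                   smaller alpha gives more skewed clients.
    Illustrative sketch, not the paper's exact procedure.
    """
    rng = random.Random(seed)
    if alpha is None:  # IID split
        idx = list(range(len(labels)))
        rng.shuffle(idx)
        return [idx[i::num_clients] for i in range(num_clients)]
    bins = [[] for _ in range(num_clients)]
    for c in sorted(set(labels)):
        idx_c = [i for i, y in enumerate(labels) if y == c]
        rng.shuffle(idx_c)
        # Dirichlet(alpha) draw via normalized Gamma samples
        g = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        shares = [x / sum(g) for x in g]
        start = 0
        for cid in range(num_clients):
            if cid < num_clients - 1:
                take = min(round(shares[cid] * len(idx_c)), len(idx_c) - start)
            else:
                take = len(idx_c) - start  # last client takes the remainder
            bins[cid].extend(idx_c[start:start + take])
            start += take
    return bins
```

For example, `partition_labels(labels, 5)` yields five near-equal IID shards, while `partition_labels(labels, 5, alpha=0.3)` concentrates most samples of each class on a few clients.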
Keywords: federated learning; TEE; ResNet164; greedy layered training
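The abstract describes training and aggregating the model layer by layer across clients. The server-side aggregation step in such schemes is commonly the size-weighted FedAvg average of client parameters; the paper's exact aggregation rule is not shown on this page, so the sketch below is an assumption. `fedavg` and its flat-parameter-list representation are illustrative names.

```python
def fedavg(client_params, client_sizes):
    """Server-side FedAvg: size-weighted average of client parameters.

    client_params: one flat list of parameter values per client.
    client_sizes:  number of local samples per client (the weights).
    Standard FedAvg sketch; not necessarily the authors' exact rule.
    """
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[k] * n for p, n in zip(client_params, client_sizes)) / total
        for k in range(dim)
    ]
```

In a greedy hierarchical setting, this average would be computed per trained layer before the next layer's round begins; e.g. `fedavg([[0.0, 0.0], [4.0, 4.0]], [1, 3])` returns `[3.0, 3.0]`.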

Share and Cite

MDPI and ACS Style

Yan, J.; Li, Y.; Yin, S.; Kang, X.; Wang, J.; Zhang, H.; Hu, B. An Efficient Greedy Hierarchical Federated Learning Training Method Based on Trusted Execution Environments. Electronics 2024, 13, 3548. https://doi.org/10.3390/electronics13173548

AMA Style

Yan J, Li Y, Yin S, Kang X, Wang J, Zhang H, Hu B. An Efficient Greedy Hierarchical Federated Learning Training Method Based on Trusted Execution Environments. Electronics. 2024; 13(17):3548. https://doi.org/10.3390/electronics13173548

Chicago/Turabian Style

Yan, Jiaxing, Yan Li, Sifan Yin, Xin Kang, Jiachen Wang, Hao Zhang, and Bin Hu. 2024. "An Efficient Greedy Hierarchical Federated Learning Training Method Based on Trusted Execution Environments" Electronics 13, no. 17: 3548. https://doi.org/10.3390/electronics13173548

APA Style

Yan, J., Li, Y., Yin, S., Kang, X., Wang, J., Zhang, H., & Hu, B. (2024). An Efficient Greedy Hierarchical Federated Learning Training Method Based on Trusted Execution Environments. Electronics, 13(17), 3548. https://doi.org/10.3390/electronics13173548

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
