Article
Peer-Review Record

AWDP-FL: An Adaptive Differential Privacy Federated Learning Framework

Electronics 2024, 13(19), 3959; https://doi.org/10.3390/electronics13193959
by Zhiyan Chen, Hong Zheng * and Gang Liu
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Reviewer 4: Anonymous
Submission received: 15 August 2024 / Revised: 23 September 2024 / Accepted: 7 October 2024 / Published: 8 October 2024
(This article belongs to the Special Issue AI for Edge Computing)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper proposes a new federated learning framework, Adaptive Weighted Differential Privacy Federated Learning (AWDP-FL), which aims to balance data privacy protection and model performance. The framework controls gradient magnitudes by dynamically adjusting the clipping threshold, thereby achieving effective privacy protection. After gradient clipping, an adaptive mechanism updates the gradients, which improves model performance while ensuring data privacy. Dynamic Gaussian noise is added when model parameters are uploaded, strengthening privacy protection while maintaining model accuracy. The server updates the global model by aggregating the perturbed parameters of all participants, ensuring that high model performance is maintained while privacy is protected. Experiments show that the AWDP-FL framework achieves good model performance while strictly protecting privacy, providing strong technical support for application scenarios where data security is critical. The authors also need to pay attention to the formatting of formulas and the naming of figures and tables.
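The clipping-and-noise step summarized above follows the general differential-privacy pattern of bounding each gradient's norm and then adding calibrated Gaussian noise. A minimal sketch of that generic pattern is given below; this is not the authors' actual AWDP-FL code (which adapts the threshold and noise per round), and the function name and parameters are illustrative only:

```python
import numpy as np

def clip_and_perturb(grad, clip_norm, noise_std, rng=None):
    """Clip a gradient to an L2-norm bound, then add Gaussian noise.

    Generic DP-SGD-style step; AWDP-FL additionally adapts clip_norm
    and noise_std across training rounds, which is omitted here.
    """
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(grad)
    # Scale the gradient down only when its norm exceeds the threshold.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Calibrated Gaussian noise provides the privacy guarantee.
    return clipped + rng.normal(0.0, noise_std, size=grad.shape)

# With noise disabled, clipping alone bounds the norm.
g = np.array([3.0, 4.0])                               # ||g|| = 5
out = clip_and_perturb(g, clip_norm=1.0, noise_std=0.0)
print(np.linalg.norm(out))                             # ≈ 1.0
```

Gradients already inside the norm bound pass through unchanged; only the noise term then perturbs them.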

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

+ The abstract effectively outlines the research's objectives, methods, and key findings. The overall writing is logical, smooth, and easy to read.

+ The three pseudocode listings in the paper fully illustrate the process and ideas of the method, which is very helpful for the reader's understanding.

+ The authors lay sufficient theoretical groundwork before the experimental demonstration, and the structure of the article is relatively complete.

+ The experimental results demonstrate improvements of AWDP-FL over current federated learning methods that apply differential privacy.

- The references are not fully cited in the article. For example, I did not see where [23] and [24] are cited. The authors need to further check the reference information to ensure that it is complete and correct. In addition, it would be better to pay more attention to the latest literature from the past two years.

- The structure of the article before the formal introduction of the method needs further optimization, because the current introduction and discussion of previous work are not clearly separated. A considerable part of the "Introduction" and "Relevant Technical Theories" sections could be rewritten as Related Work and Preliminaries.

- At the end of the Introduction, the first two stated contributions do not clearly differentiate this work from existing methods; these statements could be strengthened.

- In the experimental part, the authors could consider adding comparisons along further dimensions such as communication overhead and time overhead. Moreover, since the work involves differential privacy, different noise intensities may also affect the experimental results. Comparisons across multiple dimensions would better help readers understand the comprehensive performance of the authors' method.

- In 3.1(4), the authors mention that the server randomly selects only a subset of clients at a time to broadcast model parameters. What is the sampling ratio? Could it be set as a hyperparameter in the experiment section?

- Note the network structure mentioned in Section 4.1: one of the highlights of this work is gradient clipping optimized at the neural-network layer level. Will different neural network layers also affect the adaptive clipping optimization? If so, this should also be shown as a dimension of experimental comparison.

- In Section 4.2.1, the authors define three values as measures of the privacy budget. This is reasonable to some extent, but a more detailed explanation may be required. In fact, in many cases privacy budgets are related to the amount of data and the number of training rounds, and can be computed automatically. Claims the authors could formulate include the following: (a) AWDP-FL can achieve the same accuracy with a smaller privacy budget; or (b) the actual privacy cost paid by AWDP-FL never exceeds the privacy budget.

- It would be better to provide a GitHub link to the code to help readers reproduce and evaluate the model.

Comments on the Quality of English Language

In some details, the manuscript needs to be further optimized and improved.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors






Comments on the Quality of English Language

In places, the language could be improved to increase comprehensibility. The manuscript would benefit from careful proofreading of the grammar, sentence construction, and consistency of style.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

The paper addresses the critical issue of optimizing federated learning model performance while ensuring data privacy, proposing an innovative framework. After a thorough review, I offer the following comments and suggestions:

1. Originality and Contribution:

The manuscript presents a novel framework for federated learning that incorporates adaptive differential privacy, which is a significant contribution to the field of data privacy. The authors clearly articulate the advantages of their framework over existing techniques, which is commendable.

2. Theoretical Analysis and Methodology:

The authors provide a detailed description of the AWDP-FL framework and include pseudocode for the algorithms, aiding the reader's understanding of the proposed method's implementation. However, it is recommended that the authors further elaborate on the rationale behind the selection of key parameters and their impact, such as the choice of the momentum update parameters τt1, τt2, and ρ.

3. Experimental Design and Results:

The experiments utilize three public datasets (MNIST, Fashion-MNIST, and CIFAR-10), providing a solid foundation for validating the effectiveness of the proposed method. The authors also compare model performance across different privacy budgets, which is comprehensive. However, the authors are encouraged to include more datasets and network architectures in the experiments to further verify the generalizability of their method.

The results indicate that AWDP-FL outperforms existing methods across multiple metrics, but the authors are advised to explore the method's performance under different data distributions and scales.

4. Privacy Protection Analysis:

The paper provides a detailed analysis of privacy protection, including the client parameter upload phase and the server parameter distribution phase. However, the authors are encouraged to further discuss the types of privacy attacks that might be encountered in real-world deployments and how the AWDP-FL framework can defend against these attacks.

5. Discussion and Future Work:

The authors propose directions for future work in the discussion section, including more precise methods for measuring privacy loss and differential privacy strategies in heterogeneous federated learning scenarios. This is a positive direction, and the authors are advised to further clarify the potential impact and expected goals of these future works.

6. Writing and Organization:

The manuscript is well-organized and logically structured. However, the authors are advised to check for grammatical and spelling errors throughout the paper to ensure its professionalism.

It is recommended that the authors provide more background information on the limitations of existing methods in the introduction to better highlight the innovation of their proposed method.

7. Figures and Visualization:

The figures and visualizations in the paper help to illustrate the experimental results, but the authors are advised to check the clarity of the figures and the accuracy of the labels to ensure they are clearly understood by readers in the final publication.

8. Ensure that all relevant and key literature in the field is cited to demonstrate the depth and breadth of the research. Please check the consistency and accuracy of citations to ensure that all related work has been properly acknowledged and referenced, for example works with the following DOIs: 10.1109/TCCN.2022.3164880, 10.1109/TCSII.2022.3152522, 10.1109/TAES.2023.3266409, 10.1109/TAES.2024.3387447, 10.1016/j.dsp.2021.102994, 10.1016/j.sigpro.2022.108673, 10.1016/j.eswa.2024.124151.

Comments on the Quality of English Language

Moderate editing of English language required.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

Comments and Suggestions for Authors

The authors have made significant improvements, specifically in enhancing the clarity and depth of their explanation. The adaptive mechanisms have been explained in a more detailed manner. Furthermore, the results have been expanded.

Final remarks:

1. Kindly provide an analysis of the proposed algorithm's complexity, explaining potential computational costs or overhead.

2. While the privacy aspects are well discussed, it would be beneficial to expand the security analysis. Explain how robust the proposed framework is against advanced privacy attacks.

Comments on the Quality of English Language

Minor.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

No more comments.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
