Federated Learning Strategies for Machine Learning

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 31 January 2025

Special Issue Editors

Guest Editor
Department of Automatic Control and Applied Informatics, Gheorghe Asachi Technical University of Iasi, 70050 Iasi, Romania
Interests: machine learning; artificial intelligence; optimisation; evolutionary computation; modelling; computer vision

Guest Editor
Department of Automatic Control and Applied Informatics, Gheorghe Asachi Technical University of Iasi, 70050 Iasi, Romania
Interests: model predictive control; networked/distributed control systems; cooperative systems; connected and automated mobility; vehicle connectivity; 5G applications

Guest Editor
Department of Automatic Control and Applied Informatics, Gheorghe Asachi Technical University of Iasi, 70050 Iasi, Romania
Interests: distributed AI; robotics; reinforcement learning; knowledge representation and reasoning; generative AI; 5G and AI applications

Special Issue Information

Dear Colleagues,

Federated learning introduces new perspectives in machine learning by enabling model training across decentralised devices. Participants share only model updates, never their local data samples, in full compliance with the General Data Protection Regulation (GDPR). This approach offers many advantages for large-scale applications, from preserving data privacy to providing faster training and effective learning transfer among models exposed to data collected in different environments.
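As a concrete illustration of this exchange, the sketch below runs one round of federated averaging (FedAvg) on a toy linear least-squares task. It is a minimal sketch under simplifying assumptions: the function names and the single-gradient-step local routine are ours, not part of this Special Issue.

```python
import numpy as np

def local_step(weights, data, labels, lr=0.1):
    """One local gradient step of linear least squares on a client's
    private data; only the resulting weights ever leave the client."""
    grad = data.T @ (data @ weights - labels) / len(labels)
    return weights - lr * grad

def fedavg_round(global_weights, clients):
    """Server-side aggregation: average client updates, weighted by the
    number of samples each client holds (the FedAvg rule)."""
    sizes = [len(labels) for _, labels in clients]
    updates = [local_step(global_weights.copy(), x, y) for x, y in clients]
    return sum(w * n for w, n in zip(updates, sizes)) / sum(sizes)

# Toy usage: four clients, each with private synthetic data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = fedavg_round(w, clients)
```

Note that the server only ever sees the client updates, never the raw (data, labels) pairs, which is the privacy property described above.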

Advanced techniques are still needed to integrate this paradigm into real-world scenarios and to effectively address communication overhead, privacy and security vulnerabilities, the lack of standardisation, and model aggregation for non-independent and identically distributed (non-IID) data.
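One widely studied remedy for the communication overhead mentioned above is to compress client updates before transmission, for example by top-k sparsification. The sketch below is a generic illustration under our own naming, not a technique prescribed by this Special Issue.

```python
import numpy as np

def sparsify_topk(update, k):
    """Keep only the k largest-magnitude entries of an update vector;
    the client transmits just (indices, values) instead of the full vector."""
    idx = np.argpartition(np.abs(update), -k)[-k:]
    return idx, update[idx]

def densify(idx, vals, dim):
    """Server-side reconstruction: scatter the received values back into
    a zero vector of the original dimension."""
    out = np.zeros(dim)
    out[idx] = vals
    return out

# Example: transmit 1% of a 10,000-parameter update.
update = np.random.default_rng(1).normal(size=10_000)
idx, vals = sparsify_topk(update, k=100)
approx = densify(idx, vals, update.size)
```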

In this Special Issue, original articles and reviews related to (but not limited to) the following topics are welcome:

  1. Federated learning algorithms;
  2. Federated reinforcement learning;
  3. Federated generative models;
  4. Privacy and security in federated learning;
  5. Communication efficiency for federated learning;
  6. Federated learning frameworks;
  7. Standardisation and interoperability;
  8. Transfer learning;
  9. Multi-task learning;
  10. Domain adaptation;
  11. Evaluation metrics in transfer learning;
  12. Online and incremental model aggregation;
  13. Model and data fusion;
  14. Interpretability in aggregated models;
  15. Ensemble learning;
  16. Advanced learning algorithms;
  17. Distributed learning algorithms;
  18. Scalability and performance in distributed learning;
  19. Real-world machine learning applications with data privacy constraints;
  20. Federated learning applications.

We look forward to receiving your contributions.

Dr. Lavinia Ferariu
Prof. Dr. Constantin Florin Caruntu
Dr. Carlos Pascal
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • federated learning algorithms
  • federated reinforcement learning
  • federated generative models
  • privacy and security in federated learning
  • communication efficiency for federated learning
  • federated learning frameworks
  • standardisation and interoperability
  • multi-task learning
  • evaluation metrics in transfer learning
  • advanced learning algorithms
  • distributed learning algorithms
  • scalability and performance in distributed learning
  • real-world machine learning applications with data privacy constraints

Published Papers (1 paper)

Research

26 pages, 14457 KiB  
Article
FedUB: Federated Learning Algorithm Based on Update Bias
by Hesheng Zhang, Ping Zhang, Mingkai Hu, Muhua Liu and Jiechang Wang
Mathematics 2024, 12(10), 1601; https://doi.org/10.3390/math12101601 - 20 May 2024
Abstract
Federated learning, as a distributed machine learning framework, aims to protect data privacy while addressing the issue of data silos by collaboratively training models across multiple clients. However, a significant challenge to federated learning arises from the non-independent and identically distributed (non-iid) nature of data across different clients. Non-iid data can lead to inconsistencies between the minimal loss experienced by individual clients and the global loss observed after the central server aggregates the local models, affecting the model's convergence speed and generalization capability. To address this challenge, we propose a novel federated learning algorithm based on update bias (FedUB). Unlike traditional federated learning approaches such as FedAvg and FedProx, which independently update model parameters on each client before direct aggregation to form a global model, the FedUB algorithm incorporates an update bias in the loss function of local models, specifically the difference between each round's local model updates and the global model updates. This design aims to reduce discrepancies between local and global updates, aligning the parameters of locally updated models more closely with those of the globally aggregated model and thereby mitigating the fundamental conflict between local and global optima. Additionally, during the aggregation phase on the server side, we introduce a bias metric that assesses the similarity between each client's local model and the global model. After each training round, this metric adaptively sets the weight of each client during aggregation to achieve a better global model. Extensive experiments conducted on multiple datasets confirm the effectiveness of the FedUB algorithm. The results indicate that FedUB generally outperforms methods such as FedDC, FedDyn, and SCAFFOLD, especially in scenarios involving partial client participation and non-iid data distributions, demonstrating superior performance and faster convergence in tasks such as image classification.
(This article belongs to the Special Issue Federated Learning Strategies for Machine Learning)
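To make the abstract's two ideas concrete, the sketch below reflects one possible reading of them; the quadratic form of the update-bias penalty, the cosine-similarity bias metric, and all names here are our assumptions, and the paper's exact formulation may differ.

```python
import numpy as np

def local_loss(task_loss, local_update, global_update, mu=0.1):
    """Local objective = task loss + update-bias penalty (assumed quadratic
    here): penalises the gap between this client's update and the global
    update from the previous round."""
    return task_loss + mu * float(np.sum((local_update - global_update) ** 2))

def aggregation_weights(local_models, global_model):
    """Bias metric (assumed here to be cosine similarity to the global
    model), normalised so that more-aligned clients get larger weights."""
    sims = np.array([
        float(w @ global_model)
        / (np.linalg.norm(w) * np.linalg.norm(global_model) + 1e-12)
        for w in local_models
    ])
    sims = np.clip(sims, 0.0, None)   # ignore anti-aligned clients
    return sims / (sims.sum() + 1e-12)
```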