Security and Privacy in Distributed Machine Learning

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 February 2025 | Viewed by 1489

Special Issue Editors


Guest Editor
School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing 100081, China
Interests: applied cryptography; mobile crowdsourcing

Guest Editor
School of Information Engineering, Minzu University of China, Beijing 100081, China
Interests: artificial intelligence security; federated learning; data security; privacy protection

Guest Editor
School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing 100081, China
Interests: artificial intelligence security

Special Issue Information

Dear Colleagues,

Decentralized machine learning trains models on distributed data sources, often without directly sharing the raw data. As the field of machine learning expands and embraces decentralized architectures, securing these systems becomes crucial. Research in decentralized machine learning security develops techniques, algorithms, and frameworks that guarantee the privacy, integrity, and confidentiality of decentralized machine learning systems: mechanisms that prevent privacy leakage and unauthorized access to sensitive data during training, and methods that ensure the reliability and trustworthiness of participants so that adversaries cannot manipulate the training process. It also addresses resource constraints, optimizes computation and communication overhead, and mitigates the risks posed by system vulnerabilities and attacks.

Distributed Machine Learning Security is an important research area that aims to address the security challenges arising from the distributed nature of machine learning systems. By developing robust privacy-preserving techniques, protecting the integrity of models, and securing communication infrastructure, researchers are working towards enabling the widespread adoption of distributed machine learning in various sensitive domains while ensuring data privacy and model security.
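One of the privacy-preserving techniques mentioned above, secure aggregation, can be illustrated with a minimal sketch: clients add pairwise masks that cancel in the server-side sum, so the server learns only the aggregate update, never an individual one. The helper names and the toy two-dimensional updates below are illustrative assumptions, not taken from any specific system; production protocols additionally use cryptographic key agreement and handle client dropouts.

```python
import random

def pairwise_masks(n_clients, dim, seed=0):
    """For each pair (i, j) with i < j, draw a random mask that client i
    adds and client j subtracts, so all masks vanish in the global sum."""
    rng = random.Random(seed)
    masks = [[0.0] * dim for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
            for d in range(dim):
                masks[i][d] += m[d]
                masks[j][d] -= m[d]
    return masks

def secure_sum(updates):
    """Server aggregates masked updates: each individual update stays
    hidden, but the total equals the sum of the true updates."""
    n, dim = len(updates), len(updates[0])
    masks = pairwise_masks(n, dim)
    masked = [[u[d] + masks[i][d] for d in range(dim)]
              for i, u in enumerate(updates)]
    return [sum(col) for col in zip(*masked)]

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(secure_sum(updates))  # approximately [9.0, 12.0]: masks cancel
```

Each masked vector on its own looks random; only the column-wise sum recovers meaningful information, which is the property that makes this style of aggregation useful for federated learning.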

ICA3PP (established in 1995) is a long-running international conference covering many dimensions of parallel algorithms and architectures, from fundamental theoretical approaches to practical experimental projects and commercial components and systems. The ICA3PP 2024 Workshop on Distributed Machine Learning Security, organized by the City University of Macau, is held as part of the 24th conference in this series. With computing demands booming in every aspect of modern society, parallel processing has become increasingly critical and challenging. The conference provides a forum for academics and practitioners from all over the world to exchange ideas on improving the efficiency, performance, reliability, security, and interoperability of computing systems and applications.

The Special Issue primarily collects extended versions of selected papers presented at the ICA3PP 2024 Workshop on Distributed Machine Learning Security. However, papers not presented at ICA3PP are also welcome. The topics of interest include, but are not limited to, the following:

  • Privacy-preserving techniques in decentralized machine learning;
  • Secure multi-party computation for distributed machine learning;
  • Federated learning;
  • Detection and mitigation of model poisoning attacks in decentralized settings;
  • Secure communication protocols for decentralized machine learning;
  • Trustworthiness and reputation management in decentralized machine learning;
  • Anomaly detection and intrusion detection in distributed machine learning;
  • Resource-constrained decentralized machine learning security;
  • Scalability and efficiency of security mechanisms in decentralized machine learning;
  • Secure aggregation methods for distributed machine learning;
  • Cryptographic protocols for secure data sharing in decentralized settings;
  • Adversarial attacks and defenses in decentralized machine learning;
  • Standardization and interoperability in decentralized machine learning security;
  • Real-world applications and case studies of decentralized machine learning security.

Dr. Chuan Zhang
Dr. Xiangyun Tang
Dr. Yajie Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • privacy-preserving techniques
  • secure multi-party computation
  • federated learning
  • poisoning attacks
  • communication protocols
  • secure aggregation methods
  • artificial intelligence security

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

20 pages, 1471 KiB  
Article
A Multi-Dimensional Reverse Auction Mechanism for Volatile Federated Learning in the Mobile Edge Computing Systems
by Yiming Hong, Zhaohua Zheng and Zizheng Wang
Electronics 2024, 13(16), 3154; https://doi.org/10.3390/electronics13163154 - 9 Aug 2024
Viewed by 998
Abstract
Federated learning (FL) breaks down data silos by allowing multiple data owners to collaboratively train shared machine learning models in mobile edge computing without disclosing their local data. However, incentivizing these clients to actively participate in training while ensuring efficient convergence and high test accuracy of the model remains an important issue. Traditional methods often use a reverse auction framework but ignore client volatility. This paper proposes a multi-dimensional reverse auction mechanism (MRATR) that considers the uncertainty of client training time and reputation. First, we introduce reputation to objectively reflect the data quality and training stability of each client. We then formulate social welfare maximization as an optimization problem, which is proven to be NP-hard, and propose the multi-dimensional auction mechanism MRATR, which finds a client selection and task allocation strategy that accounts for clients' volatility and data quality differences. The mechanism has polynomial computational complexity, promotes rapid convergence of FL task models, and achieves high test accuracy while ensuring near-optimal social welfare. Finally, its effectiveness is verified through simulation experiments: compared with a series of other mechanisms, MRATR converges faster and achieves higher test accuracy on both the CIFAR-10 and IMAGE-100 datasets.
(This article belongs to the Special Issue Security and Privacy in Distributed Machine Learning)
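The client-selection problem described in the abstract above (exact social welfare maximization is NP-hard, so a polynomial-time mechanism is used instead) is commonly approximated with a greedy reverse auction. The sketch below is a hypothetical, heavily simplified illustration with a single budget dimension and reputation as the only quality signal; the function name and bid format are assumptions, and MRATR itself additionally models training-time volatility.

```python
def select_clients(bids, budget):
    """Greedy reverse auction: each client submits a bid (cost, reputation).
    Rank clients by cost per unit reputation and select them in that order
    while the server's budget lasts. Runs in O(n log n) time."""
    ranked = sorted(enumerate(bids), key=lambda kv: kv[1][0] / kv[1][1])
    selected, spent = [], 0.0
    for idx, (cost, rep) in ranked:
        if spent + cost <= budget:  # affordable: take this client
            selected.append(idx)
            spent += cost
    return selected, spent

# Four clients bidding (cost, reputation); cost/reputation ratios are
# 2.0, 1.0, 6.0, and 0.8, so the greedy order is clients 3, 1, 0, 2.
bids = [(4.0, 2.0), (3.0, 3.0), (6.0, 1.0), (2.0, 2.5)]
print(select_clients(bids, budget=9.0))  # ([3, 1, 0], 9.0)
```

Greedy ratio-based selection of this kind gives a polynomial-time approximation to the budgeted selection problem, which is the general trade-off the paper's mechanism also navigates.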
