Commemorative Special Issue: Adversarial and Federated Machine Learning: State of the Art and New Perspectives

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".

Deadline for manuscript submissions: closed (15 December 2022)

Special Issue Editor


Prof. Dr. Theodore B. Trafalis
Guest Editor
School of Industrial and Systems Engineering, The University of Oklahoma, Norman, OK 73019, USA
Interests: operations research/management science; mathematical programming; interior point methods; multiobjective optimization; control theory; computational and algebraic geometry; artificial neural networks; kernel methods; evolutionary programming; global optimization

Special Issue Information

Dear Colleagues,

In 2022, we will be celebrating ten years of research on adversarial machine learning: in 2012, Battista Biggio and others demonstrated the first gradient-based attacks on machine learning models. More recently, federated learning (FL) was developed: a machine learning setting in which many clients collaboratively train a model through a central server while keeping the training data decentralized. FL can mitigate many of the systemic privacy risks and costs of traditional, centralized machine learning, and it has received significant interest from both research and applied perspectives. However, adversarial attacks pose a serious threat to the success of FL in real-world problems. Hence, advanced defense techniques have attracted increasing attention from both the machine learning and security communities and have become a hot research topic.
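For readers new to the setting, the following minimal sketch illustrates the federated averaging loop implied by this description: clients train locally on private data, and only model parameters travel to the server. The linear model, learning rate, and all names are our own illustrative assumptions, not taken from any particular submission.

```python
import numpy as np

# Minimal sketch of federated averaging (FedAvg): clients train locally,
# only model parameters travel to the server; raw data never leave a client.

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent
    on a linear model with squared loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One communication round: each client refines the global model on its
    private data; the server averages the returned weights, weighted by
    local dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients with private local datasets
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):  # 20 communication rounds
    w = fedavg_round(w, clients)
print(w)  # approaches [2, -1] without pooling any raw data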

This Commemorative Special Issue welcomes the submission of papers based on original research about adversarial and federated machine learning. Historical reviews, as well as perspective analyses for the future in this field of research, will also be taken into consideration.

Prof. Dr. Theodore B. Trafalis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • adversarial machine learning
  • federated machine learning
  • data privacy

Published Papers (4 papers)


Research


Article
Defending against FakeBob Adversarial Attacks in Speaker Verification Systems with Noise-Adding
by Zesheng Chen, Li-Chi Chang, Chao Chen, Guoping Wang and Zhuming Bi
Algorithms 2022, 15(8), 293; https://doi.org/10.3390/a15080293 - 17 Aug 2022
Abstract
Speaker verification systems use human voices as an important biometric to identify legitimate users, thus adding a security layer that protects voice-controlled Internet-of-Things smart homes against illegal access. Recent studies have demonstrated that speaker verification systems are vulnerable to adversarial attacks such as FakeBob. The goal of this work is to design and implement a simple and lightweight defense system that is effective against FakeBob. We specifically study two opposite pre-processing operations on input audio in speaker verification systems: denoising, which attempts to remove or reduce perturbations, and noise-adding, which adds small noise to an input audio signal. Through experiments, we demonstrate that both methods are able to significantly weaken the ability of FakeBob attacks, with noise-adding achieving even better performance than denoising. Specifically, with denoising, the targeted attack success rate of FakeBob attacks can be reduced from 100% to 56.05% in GMM speaker verification systems and from 95% to only 38.63% in i-vector speaker verification systems, respectively. With noise-adding, those numbers can be further lowered to 5.20% and 0.50%, respectively. As a proactive measure, we study several possible adaptive FakeBob attacks against the noise-adding method. Experiment results demonstrate that noise-adding can still provide a considerable level of protection against these countermeasures.
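As a rough illustration of the noise-adding front end described in this abstract, the sketch below adds white Gaussian noise at a chosen signal-to-noise ratio before scoring. The SNR value, the helper names, and the thresholded accept decision are our assumptions, not the authors' implementation.

```python
import numpy as np

# Hedged sketch of the noise-adding defense idea: perturb each input
# waveform with small Gaussian noise before it reaches the speaker
# verification model, which tends to disrupt the carefully optimized
# adversarial perturbation more than it hurts benign inputs.

def add_defensive_noise(waveform, snr_db=30.0, rng=None):
    """Return a copy of the waveform with white Gaussian noise added
    at the requested signal-to-noise ratio (in dB)."""
    rng = rng or np.random.default_rng()
    signal_power = np.mean(waveform ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=waveform.shape)
    return waveform + noise

def verify(score_fn, waveform, threshold):
    """Speaker verification with the noise-adding front end; score_fn is
    any scoring backend (e.g., a GMM- or i-vector-based scorer)."""
    return score_fn(add_defensive_noise(waveform)) >= threshold
```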

Article
Communication-Efficient Vertical Federated Learning
by Afsana Khan, Marijn ten Thij and Anna Wilbik
Algorithms 2022, 15(8), 273; https://doi.org/10.3390/a15080273 - 4 Aug 2022
Abstract
Federated learning (FL) is a privacy-preserving distributed learning approach that allows multiple parties to jointly build machine learning models without disclosing sensitive data. Although FL has solved the problem of collaboration without compromising privacy, it has a significant communication overhead due to the repetitive updating of models during training. Several studies have proposed communication-efficient FL approaches to address this issue, but adequate solutions are still lacking in cases where parties must deal with different data features, a setting also referred to as vertical federated learning (VFL). In this paper, we propose a communication-efficient approach for VFL that compresses the local data of clients and then aggregates the compressed data from all clients to build an ML model. Since local data are shared in compressed form, the privacy of these data is preserved. Experiments on publicly available benchmark datasets show that the final model obtained by aggregating compressed data from the clients outperforms the local models of the clients.
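To make the compress-then-aggregate idea concrete, here is a minimal sketch under our own assumptions (PCA as the local compressor and logistic regression as the final model; the paper's actual compressor and aggregation may differ):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Hedged sketch of vertical FL with compressed-data aggregation: each
# client holds different feature columns for the same samples, compresses
# its block locally, and only the low-dimensional codes are pooled.

def client_compress(X_local, n_components=2):
    """Run locally at a client: fit a compressor on the private feature
    block and share only the compressed representation."""
    return PCA(n_components=n_components).fit_transform(X_local)

rng = np.random.default_rng(0)
n = 200
# Three clients, each owning a disjoint block of features for the same rows.
blocks = [rng.normal(size=(n, d)) for d in (6, 4, 8)]
y = (blocks[0][:, 0] + blocks[1][:, 1] + blocks[2][:, 2] > 0).astype(int)

# Server side: concatenate the compressed codes and train the final model.
Z = np.hstack([client_compress(X) for X in blocks])
model = LogisticRegression().fit(Z, y)
print(model.score(Z, y))
```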

Review


Review
Comparative Review of the Intrusion Detection Systems Based on Federated Learning: Advantages and Open Challenges
by Elena Fedorchenko, Evgenia Novikova and Anton Shulepov
Algorithms 2022, 15(7), 247; https://doi.org/10.3390/a15070247 - 15 Jul 2022
Abstract
In order to provide an accurate and timely response to different types of attacks, intrusion and anomaly detection systems collect and analyze a lot of data that may include personal and other sensitive data, so these systems can themselves be considered a source of privacy risks. Applying the federated learning paradigm to train attack and anomaly detection models may significantly decrease such risks, as the locally generated data are not transferred to any party and training is performed mainly locally at the data sources. Another benefit of using federated learning for intrusion detection is its ability to support collaboration between entities that cannot share their datasets for confidentiality or other reasons. While this approach is able to overcome the aforementioned challenges, it is rather new and not well researched, and challenges and research questions arise when it is used to implement analytical systems. In this paper, the authors review existing solutions for intrusion and anomaly detection based on federated learning and study their advantages as well as the open challenges still facing them. The paper analyzes the architecture of the proposed intrusion detection systems and the approaches used to model data partition across the clients. The paper ends with a discussion and formulation of the open challenges.
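As a concrete example of modeling data partitions across clients, the sketch below uses Dirichlet-based label partitioning, a common device in the FL literature for producing non-IID client datasets; the surveyed systems may model partitions differently, and all names here are illustrative.

```python
import numpy as np

# Hedged sketch of one common way to model non-IID data partitions across
# clients: draw per-class client proportions from a Dirichlet distribution.
# Smaller alpha gives more skewed (more heterogeneous) partitions.

def dirichlet_partition(labels, n_clients, alpha=0.5, rng=None):
    """Return a list of sample-index arrays, one per client."""
    rng = rng or np.random.default_rng(0)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Split this class's samples among clients with Dirichlet weights.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            client_idx[k].extend(part)
    return [np.array(ix) for ix in client_idx]

labels = np.repeat(np.arange(4), 250)  # toy dataset: 4 classes, 1000 samples
parts = dirichlet_partition(labels, n_clients=5, alpha=0.3)
print([len(p) for p in parts])  # uneven sizes reflect the non-IID split
```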

Review
A Review of Modern Audio Deepfake Detection Methods: Challenges and Future Directions
by Zaynab Almutairi and Hebah Elgibreen
Algorithms 2022, 15(5), 155; https://doi.org/10.3390/a15050155 - 4 May 2022
Abstract
A number of AI-generated tools are used today to clone human voices, leading to a new technology known as Audio Deepfakes (ADs). Despite being introduced to enhance human lives, for example in audiobooks, ADs have also been used to disrupt public safety. ADs have thus recently come to the attention of researchers, with Machine Learning (ML) and Deep Learning (DL) methods being developed to detect them. In this article, a review of existing AD detection methods was conducted, along with a comparative description of the available faked-audio datasets. The article introduces the types of AD attacks and then outlines and analyzes the detection methods and datasets for imitation- and synthetic-based Deepfakes. To the best of the authors' knowledge, this is the first review targeting imitated and synthetically generated audio detection methods. The similarities and differences of AD detection methods are summarized through a quantitative comparison, which finds that the method type affects performance more than the audio features themselves and that a substantial tradeoff exists between accuracy and scalability. Moreover, at the end of this article, the potential research directions and challenges of Deepfake detection methods are discussed, finding that, even though AD detection is an active area of research, further work is still needed to address the existing gaps. This article can be a starting point for researchers to understand the current state of the AD literature and to investigate more robust detection models that can detect fakeness even if the target audio contains accented voices or real-world noise.
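As an illustration of the kind of feature-based detection pipeline such reviews compare, here is a minimal sketch assuming MFCC features and a random-forest classifier; these choices are ours, not a method endorsed by the article.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

# Hedged sketch of a classical AD detection pipeline: fixed-length MFCC
# statistics per clip, fed to a binary classifier. Per the review's
# finding, the method type matters more than the exact features, so treat
# the feature/model choices here as interchangeable assumptions.

def mfcc_features(path, sr=16000, n_mfcc=20):
    """Mean and std of MFCCs over time: one fixed-length vector per clip."""
    y, _ = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

def train_detector(real_paths, fake_paths):
    """Binary classifier: label 0 = bona fide audio, 1 = deepfake."""
    X = np.stack([mfcc_features(p) for p in real_paths + fake_paths])
    y = np.array([0] * len(real_paths) + [1] * len(fake_paths))
    return RandomForestClassifier(n_estimators=200).fit(X, y)
```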
