Deepfakes, Fake News and Multimedia Manipulation from Generation to Detection (Volume II)

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Multimedia Systems and Applications".

Deadline for manuscript submissions: 30 June 2024 | Viewed by 2283

Special Issue Editor


Guest Editor
Department of Network and Computer Security, State University of New York Polytechnic Institute, Utica, NY 13502, USA
Interests: machine learning and computer vision with applications to cybersecurity, biometrics, affect recognition, image and video processing, and perceptual-based audiovisual multimedia quality assessment

Special Issue Information

Dear Colleagues,

Machine-learning-based techniques are being used to generate hyper-realistic manipulated facial multimedia content known as DeepFakes. While such technologies have positive potential for use in entertainment applications, their malevolent use can harm citizens and society as a whole by facilitating the creation of indecent content, the spread of fake news to subvert elections or undermine politics, bullying, and more effective social engineering to perpetrate financial fraud. In fact, it has been shown that manipulated facial multimedia content can deceive not only humans but also automated face-recognition-based biometric systems. The advent of advanced hardware, powerful smart devices, user-friendly apps (e.g., FaceApp and ZAO), and open-source ML code (e.g., for Generative Adversarial Networks) has enabled even non-experts to effortlessly create manipulated facial multimedia content. In principle, face manipulation involves swapping two faces, modifying facial attributes (e.g., age and gender), morphing two different faces into one face, adding imperceptible perturbations (i.e., adversarial examples), synthetically generating faces, or animating/recreating facial expressions in face images/videos.
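To make the adversarial-example category above concrete, the following is a minimal illustrative sketch of the Fast Gradient Sign Method (FGSM) on a toy linear classifier; the model, data, and epsilon value are invented for illustration and do not come from this call, but the attack structure (an epsilon-bounded, sign-of-gradient perturbation that increases the model's loss while remaining visually imperceptible) is the same one used against face-recognition systems.

```python
import numpy as np

# Toy stand-ins for a face-recognition model and an input image.
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # weights of a toy linear "classifier"
x = rng.normal(size=64)   # flattened toy input "image"
y = 1.0                   # true label in {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x):
    # Binary cross-entropy of the toy classifier on input x.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Analytic gradient of the loss with respect to the input x.
grad_x = (sigmoid(w @ x) - y) * w

# FGSM step: move each pixel by at most eps in the direction that
# increases the loss. The perturbation is bounded, hence "imperceptible".
eps = 0.05
x_adv = x + eps * np.sign(grad_x)

print(f"clean loss: {loss(x):.4f}, adversarial loss: {loss(x_adv):.4f}")
print(f"max perturbation: {np.max(np.abs(x_adv - x)):.3f}")
```

In a real attack the gradient would come from backpropagation through a deep face model rather than this closed-form expression, but the epsilon-ball constraint and the sign-of-gradient step are identical.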

Topics of interest in this Special Issue include but are not limited to:

  • The generation of DeepFakes, face morphing, manipulation, and adversarial attacks;
  • The generation of synthetic faces using ML/AI techniques, e.g., GANs;
  • The detection of DeepFakes, face morphing, manipulation, and adversarial attacks, including generalizable systems;
  • The generation and detection of audio DeepFakes;
  • Novel datasets and experimental protocols to facilitate research in DeepFakes and face manipulations;
  • The formulation and extraction of fingerprints of DeepFake generation devices, platforms, and software/apps;
  • Face recognition systems (and humans) against DeepFakes, face morphing, manipulation, and adversarial attacks, including their vulnerabilities to digital face manipulations;
  • DeepFakes in the courtroom and their implications for copyright law.

Dr. Zahid Akhtar
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deepfakes
  • digital face manipulations
  • digital forensics
  • fake news
  • multimedia manipulations
  • generative AI
  • security and privacy
  • information authenticity
  • face morphing attack
  • biometrics

Published Papers (1 paper)


Research

19 pages, 429 KiB  
Article
Media Forensic Considerations of the Usage of Artificial Intelligence Using the Example of DeepFake Detection
by Dennis Siegel, Christian Kraetzer, Stefan Seidlitz and Jana Dittmann
J. Imaging 2024, 10(2), 46; https://doi.org/10.3390/jimaging10020046 - 09 Feb 2024
Cited by 1 | Viewed by 1792
Abstract
In recent discussions in the European Parliament, the need for regulations for so-called high-risk artificial intelligence (AI) systems was identified, which are currently codified in the upcoming EU Artificial Intelligence Act (AIA) and approved by the European Parliament. The AIA is the first document to be turned into European Law. This initiative focuses on turning AI systems in decision support systems (human-in-the-loop and human-in-command), where the human operator remains in control of the system. While this supposedly solves accountability issues, it includes, on one hand, the necessary human–computer interaction as a potential new source of errors; on the other hand, it is potentially a very effective approach for decision interpretation and verification. This paper discusses the necessary requirements for high-risk AI systems once the AIA comes into force. Particular attention is paid to the opportunities and limitations that result from the decision support system and increasing the explainability of the system. This is illustrated using the example of the media forensic task of DeepFake detection.
