Handbook of DeepFakes and Face Manipulations - Call for Chapters

Editors: Zahid Akhtar¹, Roy Shilkrot², and Abdenour Hadid³
¹ State University of New York Polytechnic Institute, USA
² Stony Brook University, USA
³ Polytechnic University of Hauts-de-France, France
Machine learning-based techniques are being used to generate hyper-realistic manipulated facial multimedia content, known as DeepFakes. While such technologies have positive potential for entertainment applications, malevolent use can harm citizens and society at large by constructing indecent content, spreading fake news to subvert elections or undermine politics, bullying people, and facilitating social engineering to perpetrate financial fraud. In fact, it has been shown that manipulated facial multimedia content can deceive not only humans but also automated face recognition-based biometric systems.

The advent of advanced hardware, powerful smart devices, user-friendly apps (e.g., FaceApp, ZAO), and open-source ML code (e.g., implementations of Generative Adversarial Networks) has enabled even non-experts to effortlessly create manipulated facial multimedia content. In principle, face manipulation involves swapping two faces, modifying facial attributes (e.g., age and gender), morphing two different faces into one, adding imperceptible perturbations (i.e., adversarial examples), synthetically generating faces, or animating/reenacting facial expressions in face images/videos.
In recent years, a number of articles have been published on DeepFake and face manipulation generation and detection, and on face recognition under manipulations. This book will provide the first comprehensive account of the state of the art in DeepFakes and face manipulations. It will open with an introductory chapter by the editors summarizing the state of the art, followed by individual chapters describing the methods devised to generate DeepFakes and face manipulations, facial sample editing using audio and video modalities, detection techniques, analyses of the robustness of face recognition systems, and the usability of these methods.
This handbook on “DeepFakes and Face Manipulations” is expected to be published in the Springer Advances in Computer Vision and Pattern Recognition series in 2021. The editorial team is soliciting chapter contributions from the machine learning (ML), computer vision, biometrics, multimedia forensics, pattern recognition, and artificial intelligence (AI) research communities.
Topics of interest include, but are not limited to:
• Generation of DeepFakes, face morphing, manipulation, and adversarial attacks
• Generation of synthetic faces using ML/AI techniques, e.g., GANs
• Detection of DeepFakes, face morphing, manipulation, and adversarial attacks, including generalizable detection systems
• Generation and detection of audio DeepFakes
• Novel datasets and experimental protocols to facilitate research in DeepFakes and face manipulations
• Formulation and extraction of device, platform, and software/app fingerprints from DeepFakes
• Robustness of face recognition systems (and humans) against DeepFakes, face morphing, manipulation, and adversarial attacks, including their vulnerabilities to digital face manipulations
• DeepFakes in the courtroom and their implications for copyright law
Originality
Chapter contributions should contain 25-30% novel content relative to the authors' earlier published work.
Timeline
• Expression of interest: 04-15-2021 (tentative chapter title and abstract)
• Selection of chapters: 05-30-2021
• Deadline for full chapter submission: 07-30-2021
• Review of chapters: 08-30-2021
• Camera-ready version: 09-30-2021
Prospective authors should express their interest by April 15, 2021 by sending a concise chapter proposal (title, authors, and abstract) to the editorial team:
Zahid Akhtar (akhtarz@sunypoly.edu)
Roy Shilkrot (roy.shilkrot@stonybrook.edu)
Abdenour Hadid (abdenour.hadid@ieee.org)