
FakeCatcher: Detection of Synthetic Portrait Videos Using Biological Signals

Overview
Date 2020 Aug 6
PMID 32750816
Citations 17
Abstract

The recent proliferation of fake portrait videos poses direct threats to society, law, and privacy [1]. Believing the fake video of a politician, distributing fake pornographic content of celebrities, and fabricating impersonated fake videos as evidence in court are just a few real-world consequences of deep fakes. We present a novel approach to detecting synthetic content in portrait videos, as a preventive solution to the emerging threat of deep fakes. In other words, we introduce a deep fake detector. We observe that detectors blindly utilizing deep learning are not effective in catching fake content, as generative models produce formidably realistic results. Our key assertion is that biological signals hidden in portrait videos can be used as an implicit descriptor of authenticity, because they are neither spatially nor temporally preserved in fake content. To prove and exploit this assertion, we first engage several signal transformations for the pairwise separation problem, achieving 99.39% accuracy. Second, we utilize those findings to formulate a generalized classifier for fake content by analyzing the proposed signal transformations and corresponding feature sets. Third, we generate novel signal maps and employ a CNN to improve our traditional classifier for detecting synthetic content. Lastly, we release an "in the wild" dataset of fake portrait videos collected as part of our evaluation process. We evaluate FakeCatcher on several datasets, achieving accuracies of 96%, 94.65%, 91.50%, and 91.07% on Face Forensics [2], Face Forensics++ [3], Celeb-DF [4], and our new Deep Fakes Dataset, respectively. In addition, our approach achieves a significantly superior detection rate over the baselines and does not depend on the source, generator, or properties of the fake content.
We also analyze signals from various facial regions, under image distortions, with varying segment durations, from different generators, against unseen datasets, and under several dimensionality reduction techniques.
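The biological signals the abstract refers to are remote photoplethysmography (rPPG) signals: subtle periodic color changes in facial skin driven by blood flow, which generative models do not reproduce consistently. As a hedged illustration only (not the authors' actual pipeline, and using a synthetic frame array and a hypothetical fixed face region rather than real video and face tracking), a minimal sketch of extracting such a raw signal and checking for a heart-rate-band periodicity might look like this:

```python
import numpy as np

def extract_ppg_signal(frames, region):
    """Average the green channel over a fixed facial region of interest;
    blood-volume changes slightly modulate this mean over time."""
    y0, y1, x0, x1 = region
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def dominant_frequency(signal, fps):
    """Dominant frequency (Hz) of the detrended signal via FFT; for a
    real face this should fall in the heart-rate band (~0.7-4 Hz)."""
    detrended = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=1.0 / fps)
    return freqs[spectrum.argmax()]

# Synthetic demo: 10 s of 30 fps "video" whose green channel pulses at
# 1.2 Hz (72 bpm), standing in for a real pulse signal.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = np.zeros((fps * seconds, 64, 64, 3))
frames[..., 1] = 0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]

signal = extract_ppg_signal(frames, region=(16, 48, 16, 48))
print(round(dominant_frequency(signal, fps), 1))  # prints 1.2
```

A detector in the spirit of FakeCatcher would go further: comparing such signals across facial regions for spatial consistency and across time for temporal coherence, then feeding derived signal maps to a classifier.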

Citing Articles

OpenAI's Sora and Google's Veo 2 in Action: A Narrative Review of Artificial Intelligence-driven Video Generation Models Transforming Healthcare.

Temsah M, Nazer R, Altamimi I, Aldekhyyel R, Jamal A, Almansour M Cureus. 2025; 17(1):e77593.

PMID: 39831180 PMC: 11741145. DOI: 10.7759/cureus.77593.


Enhancing practicality and efficiency of deepfake detection.

Balafrej I, Dahmane M Sci Rep. 2024; 14(1):31084.

PMID: 39730641 PMC: 11680869. DOI: 10.1038/s41598-024-82223-y.


SecureVision: Advanced Cybersecurity Deepfake Detection with Big Data Analytics.

Kumar N, Kundu A Sensors (Basel). 2024; 24(19).

PMID: 39409343 PMC: 11478486. DOI: 10.3390/s24196300.


Deepfake: definitions, performance metrics and standards, datasets, and a meta-review.

Altuncu E, Franqueira V, Li S Front Big Data. 2024; 7:1400024.

PMID: 39296632 PMC: 11408348. DOI: 10.3389/fdata.2024.1400024.


Media Forensics Considerations on DeepFake Detection with Hand-Crafted Features.

Siegel D, Kraetzer C, Seidlitz S, Dittmann J J Imaging. 2024; 7(7).

PMID: 39080896 PMC: 8321349. DOI: 10.3390/jimaging7070108.