Deepfakes, Fake Barns, and Knowledge from Videos

Overview
Journal Synthese
Date 2023 Jan 30
PMID 36714268
Abstract

Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form has been the advent of deepfakes. Deepfakes are AI-generated videos that typically depict people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate an analogous degree of epistemic risk to that which is found in traditional cases. Given that barn cases have posed a long-standing challenge for virtue-theoretic accounts of knowledge, I consider whether a similar challenge extends to deepfakes. In doing so, I consider how Duncan Pritchard's recent anti-risk virtue epistemology meets the challenge. While Pritchard's account avoids problems in traditional barn cases, I claim that it leads to local scepticism about knowledge from online videos in the case of deepfakes. I end by considering how two alternative virtue-theoretic approaches might vindicate our epistemic dependence on videos in an increasingly digital world.

Citing Articles

OpenAI's Sora and Google's Veo 2 in Action: A Narrative Review of Artificial Intelligence-driven Video Generation Models Transforming Healthcare.

Temsah M, Nazer R, Altamimi I, Aldekhyyel R, Jamal A, Almansour M. Cureus. 2025; 17(1):e77593.

PMID: 39831180. PMC: 11741145. DOI: 10.7759/cureus.77593.
