Deep Fakes and Memory Malleability: False Memories in the Service of Fake News

Overview
Journal AJOB Neurosci
Specialty Medical Ethics
Date 2020 Apr 2
PMID 32228386
Citations 3
Abstract

Deep fakes have rapidly emerged as one of the most ominous concerns within modern society. The ability to easily and cheaply generate convincing images, audio, and video via artificial intelligence will have repercussions within politics, privacy, law, and security, and broadly across all of society. In light of this widespread apprehension, numerous technological efforts aim to develop tools that distinguish reliable audio/video from fakes. These tools and strategies will be particularly effective for consumers when their guard is naturally up, for example during election cycles. However, recent research suggests that deep fakes can not only create credible representations of reality but can also be employed to create false memories. Memory malleability research has existed for some time, but it relied on doctored photographs or text to generate fraudulent recollections. These recollected but fake memories take advantage of our cognitive miserliness, which favors selecting those recalled memories that evoke our preferred weltanschauung. Even responsible consumers can be duped when false but belief-consistent memories, implanted when we are least vigilant, can, like a Trojan horse, later be elicited at crucial dates to confirm our pre-determined biases and influence us to accomplish nefarious goals. This paper seeks to understand the process by which such memories are created and, based on that understanding, to propose ethical and legal guidelines for the legitimate use of fake technologies.

Citing Articles

The public mental representations of deepfake technology: An in-depth qualitative exploration through Quora text data analysis.

Caci B, Giordano G, Alesi M, Gentile A, Agnello C, Lo Presti L. PLoS One. 2025; 19(12):e0313605.

PMID: 39775334 PMC: 11684586. DOI: 10.1371/journal.pone.0313605.


On manipulation by emotional AI: UK adults' views and governance implications.

Bakir V, Laffer A, McStay A, Miranda D, Urquhart L. Front Sociol. 2024; 9:1339834.

PMID: 38912311 PMC: 11190365. DOI: 10.3389/fsoc.2024.1339834.


Face/Off: Changing the face of movies with deepfakes.

Murphy G, Ching D, Twomey J, Linehan C. PLoS One. 2023; 18(7):e0287503.

PMID: 37410765 PMC: 10325052. DOI: 10.1371/journal.pone.0287503.


Digital Resilience Through Training Protocols: Learning To Identify Fake News On Social Media.

Soetekouw L, Angelopoulos S. Inf Syst Front. 2022; :1-17.

PMID: 35068998 PMC: 8767033. DOI: 10.1007/s10796-021-10240-7.