Back
Chesney & Citron (2019)
scholarship.law.bu.edu/faculty_scholarship/640/
A foundational legal scholarship piece frequently cited in AI governance and policy discussions around synthetic media; relevant to AI safety communities concerned with misuse, deception, and the erosion of epistemic trust in the information ecosystem.
Metadata
Importance: 72/100 · journal article · primary source
Summary
Chesney and Citron's seminal 2019 law review article examines the emerging threat of deepfake technology to privacy, democratic discourse, and national security. The paper analyzes how AI-generated synthetic media undermines trust in audiovisual evidence and proposes legal and technical countermeasures. It is widely cited as a foundational work in the legal and policy literature on synthetic media.
Key Points
- Deepfakes pose serious threats to individuals (non-consensual imagery, reputational harm) and to society (political manipulation, disinformation campaigns).
- Existing legal frameworks (defamation, fraud, evidence law) are poorly equipped to address AI-generated synthetic media at scale.
- The technology creates a "liar's dividend," whereby genuine media can be dismissed as fake, eroding trust in all digital evidence.
- The authors propose a multi-layered response, including platform liability reforms, criminal statutes, and technical authentication standards.
- National security implications include foreign adversaries using deepfakes for influence operations and for destabilizing democratic institutions.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI-Driven Legal Evidence Crisis | Risk | 43.0 |
| AI-Driven Trust Decline | Risk | 55.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 4 KB
"Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security" by Robert Chesney and Danielle K. Citron
Faculty Scholarship
Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security
Authors
Robert Chesney, University of Texas
Danielle K. Citron, Boston University School of Law
Author granted license
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
Document Type
Article
Publication Date
12-2019
ISSN
0008-1221
Publisher
University of California Berkeley School of Law
Language
en-US
Abstract
Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection. Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors. While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well. Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions.
Recommended Citation
Robert Chesney & Danielle K. Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 California Law Review 1753 (2019).
... (truncated, 4 KB total)
Resource ID: ad6fe8bb9c2db0d9 | Stable ID: sid_mF2oAkmunm