Longterm Wiki

The Generative AI Paradox: GenAI and the Erosion of Trust, the Corrosion of Information Verification, and the Demise of Truth

paper

Data Status: Not fetched

Cited by 1 page

Page                      Type   Quality
AI Trust Cascade Failure  Risk   55.0

Cached Content Preview

HTTP 200 · Fetched Feb 22, 2026 · 50 KB
The Generative AI Paradox: GenAI and the Erosion of Trust, the Corrosion of Information Verification, and the Demise of Truth

 
 
Emilio Ferrara 1,2,3,∗

1 Thomas Lord Department of Computer Science, University of Southern California (USC)
2 Annenberg School for Communication, University of Southern California (USC), Los Angeles, CA, USA
3 Information Sciences Institute (ISI), University of Southern California (USC), Marina del Rey, CA, USA

∗ Correspondence: emiliofe@usc.edu
 
 Abstract

Generative AI (GenAI) now produces text, images, audio, and video that can be perceptually convincing at scale and at negligible marginal cost. While public debate often frames the associated harms as “deepfakes” or incremental extensions of misinformation and fraud, this view misses a broader socio-technical shift: GenAI enables synthetic realities: coherent, interactive, and potentially personalized information environments in which content, identity, and social interaction are jointly manufactured and mutually reinforcing. We argue that the most consequential risk is not merely the production of isolated synthetic artifacts, but the progressive erosion of shared epistemic ground and institutional verification practices as synthetic content, synthetic identity, and synthetic interaction become easy to generate and hard to audit. This paper (i) formalizes synthetic reality as a layered stack (content, identity, interaction, institutions), (ii) expands a taxonomy of GenAI harms spanning personal, economic, informational, and socio-technical risks, (iii) articulates the qualitative shifts introduced by GenAI (cost collapse, throughput, customization, micro-segmentation, provenance gaps, and trust erosion), and (iv) synthesizes recent risk realizations (2023–2025) into a compact case bank illustrating how these mechanisms manifest in fraud, elections, harassment, documentation, and supply-chain compromise. We then propose a mitigation stack that treats provenance infrastructure, platform governance, institutional workflow redesign, and public resilience as complementary rather than substitutable, and outline a research agenda focused on measuring epistemic security. We conclude with the Generative AI Paradox: as synthetic media becomes ubiquitous, societies may rationally discount digital evidence altogether, raising the cost of truth for everyday life and for democratic and economic institutions.

 
 
 Keywords: artificial intelligence; generative AI; information verification; epistemic security

 Figure 1: (Top Left) In January 2024, the r/StableDiffusion community on Reddit demonstrated a proof-of-concept workflow to synthetically generate personas and (Bottom Left) proofs of identity. (Top Right) GenAI can produce lifelike depictions of never-occurred events (MJv5 prompt: 

... (truncated, 50 KB total)