Harvard Kennedy School Misinformation Review article
A contrarian but evidence-grounded perspective useful for AI safety researchers and policymakers evaluating the actual risks of AI-generated misinformation, countering common assumptions in AI governance discussions about information integrity.
Metadata
Summary
This Harvard Kennedy School Misinformation Review article argues that fears about generative AI dramatically worsening the misinformation landscape are exaggerated, drawing on empirical evidence about how misinformation actually spreads and is consumed. The authors contend that psychological and sociological factors limiting misinformation uptake pre-AI remain relevant, and that demand-side constraints on belief change are often underappreciated. The piece offers a counterpoint to alarmist narratives about AI-generated content flooding the information ecosystem.
Key Points
- Empirical research suggests people are less susceptible to misinformation than commonly assumed, limiting the practical impact of AI-generated false content.
- The supply of misinformation has always exceeded demand; AI increasing supply does not necessarily translate to increased belief or harm.
- Existing cognitive and social mechanisms that constrain misinformation spread will likely continue to operate even with AI-generated content.
- Overblown fears risk misallocating resources and attention toward AI misinformation rather than more pressing, evidence-backed information threats.
- The article calls for more empirically grounded, measured assessments of AI's actual versus hypothetical impact on public epistemics.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Epistemic Collapse | Risk | 49.0 |
Cached Content Preview
Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown | HKS Misinformation Review
Article Metrics: 55 CrossRef citations | 2,352 PDF downloads | 43,069 page views
Many observers of the current explosion of generative AI worry about its impact on our information environment, with concerns being raised about the increased quantity, quality, and personalization of misinformation. We assess these arguments with evidence from communication studies, cognitive science, and political science. We argue that current concerns about the effects of generative AI on the misinformation landscape are overblown.
By
Felix M. Simon Oxford Internet Institute, University of Oxford, UK
Sacha Altay Department of Political Science, University of Zurich, Switzerland
Hugo Mercier Institut Jean Nicod, Département d’études cognitives, ENS, EHESS, PSL University, CNRS, France
Image by Alan Warburton / Better Images of AI
Introduction
Recent progress in generative AI has led to concerns that it will “trigger the next misinformation nightmare” (Gold & Fisher, 2023), that people “will not be able to know what is true anymore” (Metz, 2023), and that we are facing a “tech-enabled Armageddon” (Scott, 2023).
Generative AI systems are capable of generating new forms of data by applying machine learning to large quantities of training data. This new data can include text (such as Google’s Bard, Meta’s LLaMa, or OpenAI’s ChatGPT), visuals (such as Stable Diffusion or OpenAI’s DALL-E), or audio (such as Microsoft’s VALL-E). The output that can be produced with these systems at great speed and ease for a majority of users is, depending on the instructions, sufficiently sophisticated that humans can perceive it as indistinguishable from human-generated content (Groh et al., 2022).
According to various voices, including some leading AI researchers, generative AI will make it easier to create realistic but false or misleading content at scale, with potentially catastrophic outcomes for people's beliefs and behaviors, the public arena of information,¹ and democracy. These concerns can be divided into four categories (Table 1).

¹ We understand the public arena to be the common but contested mediated space in which different actors exchange information, discuss matters of common concern, and which mediates the relation between different parts of society (Jungherr & Schroeder, 2021a).
| Argument | Explanation of claim | Presumed effect | Source |
|---|---|---|---|
| 1. Increased quantity of misinformation | Due to the ease of access and use, generative AIs can be used to create mis-/disinformation at scale at little to no cost to individuals and organized actors | Increased quantity of misinformation allows ill-intentioned actors to "floo | |
... (truncated, 37 KB total)