AI Persuasion: Risks and Implications of AI-Driven Influence
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Stanford HAI
Published by Stanford HAI in 2024, this resource is relevant to AI safety discussions around misuse, influence operations, and the broader challenge of maintaining human epistemic autonomy in the face of increasingly persuasive AI systems.
Metadata
Summary
Stanford HAI examines the growing capabilities of AI systems to persuade and influence human beliefs and behavior, analyzing the risks this poses for democracy, autonomy, and social trust. The resource explores how large language models can craft targeted persuasive content at scale, and considers policy and technical responses to mitigate manipulation risks.
Key Points
- AI systems are becoming increasingly capable of generating highly persuasive, personalized content that can influence human opinions and decisions at scale.
- The combination of micro-targeting and LLM fluency raises concerns about AI-enabled disinformation, propaganda, and the undermining of epistemic autonomy.
- Researchers highlight risks to democratic processes, including AI-generated political messaging and synthetic media manipulation.
- Technical and governance interventions—such as content provenance, watermarking, and platform policies—are discussed as potential mitigations.
- The piece situates AI persuasion within broader AI safety concerns around deceptive or manipulative AI behavior.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Persuasion and Social Manipulation | Capability | 63.0 |