Humans Are More Likely to Believe Messages from AI (Stanford HAI)
Credibility Rating
4/5
High (4). High quality: established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Stanford HAI
Relevant to AI safety discussions around misuse, disinformation, and the social risks of deploying persuasive AI systems; underscores why transparency and content labeling policies matter.
Metadata
Importance: 55/100 | Type: news article
Summary
A Stanford HAI study examines how people respond to messages they believe are generated by AI versus humans, finding that individuals tend to place higher credibility or trust in AI-generated content. This has significant implications for misinformation, persuasion, and the societal risks of AI-generated communication at scale.
Key Points
- People are more likely to believe or trust messages when they are told the source is AI rather than human.
- This credibility bias could be exploited for disinformation campaigns or manipulation at scale.
- The finding raises concerns about AI's potential to amplify persuasion and undermine critical evaluation of information.
- Results suggest current public mental models of AI may inadvertently confer unwarranted authority on AI outputs.
- Has policy implications for AI-generated content labeling, disclosure requirements, and media literacy.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Capability Threshold Model | Analysis | 72.0 |
Cached Content Preview
HTTP 200 | Fetched Apr 9, 2026 | 0 KB
404
Resource ID: 9fc081c471fb3bb0 | Stable ID: sid_PdkpIq8GTd