Bail et al. 2018 - Exposure to Opposing Views on Social Media Can Increase Political Polarization
Credibility Rating
Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.
Rating inherited from publication venue: PNAS
Relevant to AI safety discussions around recommendation algorithms, epistemic bubbles, and how AI-mediated information environments may reinforce rather than correct political polarization, with implications for alignment and governance of social AI systems.
Paper Details
Metadata
Summary
This PNAS field experiment by Bail et al. tested whether exposure to opposing political views on social media reduces polarization. Contrary to the 'echo chamber' correction hypothesis, Republicans who followed a bot retweeting liberal content became substantially more conservative, while Democrats who followed a conservative bot became slightly more liberal, though that effect was not statistically significant. The results suggest that algorithmic exposure to opposing views can backfire.
Key Points
- Randomized field experiment offered ~1,200 Twitter users financial compensation to follow bots retweeting content from the opposing political party for one month
- Republicans exposed to liberal content became significantly more conservative, contradicting echo-chamber mitigation assumptions
- Democrats exposed to conservative content shifted slightly more liberal, but the effect was not statistically significant
- Results suggest 'backfire effects' in which cross-cutting exposure reinforces rather than moderates existing political beliefs
- Challenges common policy prescriptions holding that increased exposure to diverse views will reduce polarization
- Has implications for AI content recommendation systems and how algorithmic curation shapes political epistemics
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Sycophancy Feedback Loop Model | Analysis | 53.0 |
| AI Preference Manipulation | Risk | 55.0 |
Cached Content Preview
# Exposure to opposing views on social media can increase political polarization

Authors: Christopher A. Bail, Lisa P. Argyle, Taylor W. Brown, John P. Bumpus, Haohan Chen, M. B. Fallin Hunzaker, Jaemin Lee, Marcus Mann, Friedolin Merhout, Alexander Volfovsky
Journal: Proceedings of the National Academy of Sciences
Published: 2018-09-11
DOI: 10.1073/pnas.1804840115

## Abstract

Significance: Social media sites are often blamed for exacerbating political polarization by creating “echo chambers” that prevent people from being exposed to information that contradicts their preexisting beliefs. We conducted a field experiment that offered a large group of Democrats and Republicans financial compensation to follow bots that retweeted messages by elected officials and opinion leaders with opposing political views. Republican participants expressed substantially more conservative views after following a liberal Twitter bot, whereas Democrats’ attitudes became slightly more liberal after following a conservative Twitter bot—although this effect was not statistically significant. Despite several limitations, this study has important implications for the emerging field of computational social science and ongoing efforts to reduce political polarization online.
Stable ID: sid_228X0sq7h3