Longterm Wiki

Bail et al. 2018 - Exposure to Opposing Views on Social Media Can Increase Political Polarization


Authors

Christopher A. Bail · Lisa P. Argyle · Taylor W. Brown · John P. Bumpus · Haohan Chen · M. B. Fallin Hunzaker · Jaemin Lee · Marcus Mann · Friedolin Merhout · Alexander Volfovsky

Credibility Rating

Gold (5/5)

Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.

Rating inherited from publication venue: PNAS

Relevant to AI safety discussions around recommendation algorithms, epistemic bubbles, and how AI-mediated information environments may reinforce rather than correct political polarization, with implications for alignment and governance of social AI systems.

Paper Details

Citations
0
Year
2018
Methodology
field experiment

Metadata

Importance: 52/100 · journal article · primary source

Summary

This PNAS study by Bail et al. experimentally tested whether exposure to opposing political views on social media reduces polarization. Contrary to the 'echo chamber' correction hypothesis, they found that Republicans who followed a liberal bot became substantially more conservative, while Democrats who followed a conservative bot became only slightly more liberal, an effect that was not statistically significant. The results suggest that algorithmic exposure to opposing views can backfire.

Key Points

  • Randomized experiment had ~1,200 Twitter users follow bots retweeting opposing political party content for one month
  • Republicans exposed to liberal content became significantly more conservative, contradicting echo-chamber mitigation assumptions
  • Results suggest 'backfire effects' where cross-cutting exposure reinforces rather than moderates existing political beliefs
  • Challenges common policy prescriptions that increasing exposure to diverse views will reduce polarization
  • Has implications for AI content recommendation systems and how algorithmic curation shapes political epistemics
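The design described above is a randomized encouragement experiment: compare the pre-to-post attitude change of users assigned to follow an opposing-view bot against that of a control group. A minimal sketch of the implied intent-to-treat estimate, using synthetic data (the ideology scale, sample sizes, and the +0.2 "backfire" drift are illustrative assumptions, not the paper's actual data or analysis, which used regression with covariates):

```python
import random

random.seed(0)

# Synthetic ideology scores on a 1-7 scale (7 = most conservative) for a
# Republican-leaning sample. All numbers here are illustrative assumptions.
def simulate(n, drift):
    pre = [random.uniform(4.5, 6.5) for _ in range(n)]
    post = [p + drift + random.gauss(0, 0.3) for p in pre]
    return pre, post

treat_pre, treat_post = simulate(600, drift=0.2)  # followed opposing-view bot
ctrl_pre, ctrl_post = simulate(600, drift=0.0)    # control group, no bot

def mean(xs):
    return sum(xs) / len(xs)

# Intent-to-treat effect: difference in mean pre-to-post change across groups.
itt = (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))
print(round(itt, 2))  # a positive value indicates a shift toward conservatism
```

A positive estimate here would correspond to the backfire pattern the study reports for Republicans; the paper's actual analysis is more elaborate and adjusts for compliance and covariates.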

Cited by 2 pages

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 1 KB
# Exposure to opposing views on social media can increase political polarization
Authors: Christopher A. Bail, Lisa P. Argyle, Taylor W. Brown, John P. Bumpus, Haohan Chen, M. B. Fallin Hunzaker, Jaemin Lee, Marcus Mann, Friedolin Merhout, Alexander Volfovsky
Journal: Proceedings of the National Academy of Sciences
Published: 2018-09-11
DOI: 10.1073/pnas.1804840115
## Abstract

Significance Social media sites are often blamed for exacerbating political polarization by creating “echo chambers” that prevent people from being exposed to information that contradicts their preexisting beliefs. We conducted a field experiment that offered a large group of Democrats and Republicans financial compensation to follow bots that retweeted messages by elected officials and opinion leaders with opposing political views. Republican participants expressed substantially more conservative views after following a liberal Twitter bot, whereas Democrats’ attitudes became slightly more liberal after following a conservative Twitter bot—although this effect was not statistically significant. Despite several limitations, this study has important implications for the emerging field of computational social science and ongoing efforts to reduce political polarization online.
Resource ID: 23a9c979fe23842a | Stable ID: sid_228X0sq7h3