Oxford Internet Institute: Computational Propaganda
comprop.oii.ox.ac.uk
This research group is a leading academic source on AI-enabled influence operations and political manipulation; relevant to AI safety discussions around misuse, information integrity, and societal-scale harms from deployed AI systems.
Metadata
Importance: 52/100 · homepage
Summary
The Oxford Internet Institute's Computational Propaganda project investigates how digital technologies, bots, and algorithmic systems are weaponized to manipulate public opinion and undermine democratic processes. Using computational and social science methods, the project analyzes disinformation campaigns, social media manipulation, and platform dynamics across multiple countries. Their research informs policy responses to coordinated inauthentic behavior and influence operations.
Key Points
- Tracks and analyzes state-sponsored and automated disinformation campaigns across social media platforms globally
- Produces country-by-country reports on organized social media manipulation and computational propaganda tactics
- Examines how algorithms and AI tools are leveraged to amplify misinformation and political messaging at scale
- Bridges academic research and policy, providing evidence-based recommendations for platform governance and election integrity
- Relevant to AI safety as AI-powered influence operations represent a near-term misuse risk with significant societal consequences
Review
The Computational Propaganda project at the Oxford Internet Institute takes a critical interdisciplinary approach to understanding how digital technologies can be weaponized to distort public discourse and undermine democratic institutions. Led by Professor Philip Howard, the research spans sociology, information studies, and international affairs, examining how algorithms, automation, and strategic communication techniques are used to spread misleading information. The project's methodology combines computational analysis, qualitative research, and big-data approaches to map the complex ecosystem of online propaganda. By investigating topics such as anti-vaccine communities, political misinformation, and coordinated influence campaigns, the researchers provide nuanced insights into how digital platforms can be manipulated. Their work has significant implications for AI safety: it highlights the risk of computational systems being used to spread harmful narratives and demonstrates the need for robust governance frameworks to mitigate these threats.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Authoritarian Tools | Risk | 91.0 |
| AI-Induced Cyber Psychosis | Risk | 37.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 19 KB
OII | Programme on Democracy and Technology
Project Contents
Overview
Major Areas of Research
Key Information
Participants
Videos
Upcoming Project Events
Project News
Project Press Coverage
Related Projects
Related Topics
Overview
The Programme on Technology and Democracy investigates the use of algorithms, automation, and computational propaganda in public life. This programme of activity is backed by a team of social and information scientists eager to protect democracy and put social data science to work for civic engagement. We are conducting international fieldwork with the political consultants and computer experts who are commissioned to activate or catch information operations. We are building original databases of incidents and accounts involved in such activities, and we use our knowledge to make better tools for detecting and ending interference with democracy. We engage in “real-time” social and information science, actively disseminating our findings to journalists, industry, and foreign policy experts. Our network of experts helps civil society, industry, government, and other independent researchers develop a better understanding of the role of technology in public life.
Project website
Key Information
Project dates:
January 2012 – August 2025
Contact:
Philip Howard
Major Areas of Research
Covid-19
The global pandemic has brought to the fore the pressing problems caused by disinformation, leading many scholars to study the “infodemic” that is accompanying and exacerbating the public health crisis. Disinformation about the virus has already led to serious health repercussions in countries around the world. Our research on COVID-related disinformation looks at the prominence of stories by junk news outlets and state-backed media outlets on social media. ComProp researchers are also investigating the systems that help these junk news stories to succeed: from the online advertising ecosystem to incentives on social media platforms.
Elections
The tools of computational propaganda are often deployed around elections, as various actors seek to sway public opinion through legitimate and illegitimate means. Our research on disinformation and elections looks at information-sharing on social media by members of the electorate, foreig
... (truncated, 19 KB total)
Resource ID: 6482a9b515875f49 | Stable ID: sid_KhMQ1k9HCo