Carnegie Endowment - Can Democracy Survive the Disruptive Power of AI?
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Carnegie Endowment
Published by the Carnegie Endowment for International Peace in December 2024, this report is relevant to AI governance discussions focused on societal and political risks, complementing more technical AI safety literature with a democratic institutions perspective.
Metadata
Summary
This Carnegie Endowment report examines how AI technologies threaten democratic institutions through disinformation, manipulation of public opinion, and concentration of power. It analyzes the risks AI poses to electoral integrity, civic discourse, and accountability mechanisms, while exploring potential policy responses to safeguard democratic governance.
Key Points
- AI-generated disinformation and synthetic media can undermine informed democratic participation and erode public trust in institutions.
- Concentration of AI capabilities in a few powerful actors creates asymmetric influence over political narratives and governance processes.
- Surveillance and micro-targeting tools powered by AI enable new forms of political manipulation at unprecedented scale.
- Existing democratic institutions and regulatory frameworks are ill-equipped to respond to the speed and scale of AI disruption.
- Policy responses must balance enabling beneficial AI applications in governance while mitigating risks to democratic integrity.
Review
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI-Assisted Deliberation | Approach | 63.0 |
Cached Content Preview
Can Democracy Survive the Disruptive Power of AI? | Carnegie Endowment for International Peace
AI models enable malicious actors to manipulate information and disrupt electoral processes, threatening democracies. Tackling these challenges requires a comprehensive approach that combines technical solutions and societal efforts.
By Raluca Csernatoni | Published on Dec 18, 2024. Since the recent popularization of powerful generative artificial intelligence (AI) systems, there are growing fears that they will impact and destabilize democracies in unforeseen ways. These emerging technologies, made famous by large language models (LLMs) like OpenAI’s ChatGPT chatbot, refer to algorithms that can produce new content based on the data they have been trained on. They can write text and music, craft realistic images and videos, generate synthetic voices, and manipulate vast amounts of information. While generative AI models hold tremendous potential for innovation and creativity, they also open the door to various forms of misuse that endanger democratic societies. These technologies present significant threats to democracies by enabling malicious actors—from political opponents to foreign adversaries—to manipulate public perceptions, disrupt electoral processes, and amplify misinformation.
With increased use of AI-generated content and a cohort of countries moving toward digital authoritarianism by embracing AI-supercharged mass surveillance, the stakes could not be higher. Beyond generally introducing more complexity into the information environment and allowing the faster creation of higher-quality content by more people, generative AI models have the potential to impact democratic discourse by challenging the integrity of elections and further enabling digital authoritarianism. But this is just one facet of a larger issue: the collision between rapidly advancing AI technologies and the erosion of democratic safeguards. The intersection of digital authoritarianism and AI systems—from simpler AI technologies to the latest state-of-the-art LLMs—empowers autocratic governments both domestically and in their foreign interference tactics, presenting a key challenge for twenty-first-century democracy.
The core of the problem lies in the speed and scale at which AI tools, once deployed or weaponized on social media platforms, can generate misleading content. In doing so, these tools outpace both governmental oversight and society’s ability to manage the consequences. The intersection of generative AI models and foreign interference presents a growing threat to global stability and democratic cohesion. As these systems generate highly persuasive text, they enable states and nonstate actors to propagate disinformation and malicious narratives at scale. Amid the evolution of AI technologies, a comprehensive approach that combines technical solutions and
... (truncated, 22 KB total)