Longterm Wiki

Democracy Levels Framework

Status: active

Framework by Aviv Ovadya and 13 co-authors for evaluating the degree to which decisions in a given domain are made democratically. Provides milestones toward meaningfully democratic AI. Published November 2024 (arXiv:2411.09222).

Organizations (3)
AI & Democracy Foundation: Nonprofit designing and testing democratic processes for AI governance and alignment. Led by Aviv Ovadya. Key projects include the Democracy Levels framework (evaluating how democratically decisions are made), Case Law for AI (judicial-inspired approaches to connecting deliberative input into alignment), and Safeguarded AI (ARIA partnership for formal safety specifications through deliberation). 11 staff members.
US AI Safety Institute (now CAISI): The US AI Safety Institute (US AISI), renamed the Center for AI Standards and Innovation (CAISI) in June 2025, is a government agency within the National Institute of Standards and Technology (NIST) established in 2023 to develop standards, evaluations, and guidelines for safe and trustworthy artificial intelligence.
OpenAI: OpenAI is the AI research company that brought large language models into mainstream consciousness through ChatGPT. Founded in December 2015 as a non-profit with the mission to ensure artificial general intelligence benefits all of humanity, OpenAI has undergone dramatic evolution - from non-profit to "capped-profit," from research lab to produc...

People (2)
Aviv Ovadya: Founder and CEO of the AI & Democracy Foundation. Coined "infocalypse" (2016) to describe the AI-generated misinformation crisis. Created the Democracy Levels framework for evaluating democratic AI governance. Leads the ARIA-funded "Deliberative AI Specifications and Infrastructure" project. Affiliated with the Harvard Berkman Klein Center, the Centre for the Governance of AI, and the newDemocracy Foundation. MIT MEng in CS.
Iason Gabriel: Research Scientist (Ethics) at Google DeepMind. Author of "Artificial Intelligence, Values, and Alignment" (Minds and Machines, 2020). Contributed to the Democratic AI experiment (Koster, Balaguer et al., Nature Human Behaviour, 2022), in which RL-designed redistribution won majority human votes over expert-designed alternatives.

Related Projects (1)
MetaDAO Futarchy Protocol: First functional futarchy governance on mainnet. Participants trade on prediction markets about whether proposals will increase token value, replacing traditional voting. Generates ~$4,825/day in revenue. 11 projects use "Futarchy as a Service."

Related Wiki Pages


Policy

US Executive Order on Safe, Secure, and Trustworthy AI
Voluntary AI Safety Commitments

Approaches

AI Safety Cases
AI Evaluation
Third-Party Model Auditing

Organizations

US AI Safety Institute
OpenAI

Risks

AI Authoritarian Tools
AI-Driven Institutional Decision Capture
Compute Concentration
AI Development Racing Dynamics
AI-Driven Concentration of Power

Other

Iason Gabriel

Analysis

OpenAI Foundation Governance Paradox
US Government Authority Over Commercial AI Infrastructure
MetaDAO Futarchy Protocol

Concepts

Governance-Focused Worldview

Historical

International AI Safety Summit Series

Key Debates

Open vs Closed Source AI
Government Regulation vs Industry Self-Governance

Clusters

governance

Sources

Tags

democratic-ai, framework, governance
