Longterm Wiki

MetaDAO Futarchy Protocol

Status: active

First functional futarchy governance on mainnet. Participants trade on prediction markets about whether proposals will increase token value, replacing traditional voting. Generates ~$4,825/day in revenue. Eleven projects use "Futarchy as a Service."
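The core futarchy mechanism can be sketched in a few lines. The sketch below is illustrative only, not MetaDAO's actual on-chain implementation: it assumes two conditional markets that price the token under "proposal passes" versus "proposal fails," and adopts the proposal only if the pass-conditional price is higher (MetaDAO compares time-weighted average prices; the `threshold` margin here is a hypothetical parameter).

```python
from dataclasses import dataclass


@dataclass
class ConditionalMarket:
    """Price of the token conditional on one outcome of the proposal."""
    twap: float  # time-weighted average price over the trading window (hypothetical)


def decide(pass_market: ConditionalMarket,
           fail_market: ConditionalMarket,
           threshold: float = 0.0) -> bool:
    """Adopt the proposal iff the pass-conditional price exceeds the
    fail-conditional price by more than `threshold` (a safety margin)."""
    return pass_market.twap > fail_market.twap * (1 + threshold)


# Traders expect the proposal to raise token value by ~5%: adopt it.
print(decide(ConditionalMarket(twap=1.05), ConditionalMarket(twap=1.00)))  # True
```

A nonzero `threshold` makes the rule conservative: a proposal must beat the status quo by a clear margin before it passes, which damps noise in thinly traded conditional markets.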

Organizations (2)

MetaDAO — First DAO implementing functional futarchy governance on mainnet (Solana). Instead of token-holder voting, participants trade on prediction markets about whether proposals will increase token value. Generates ~$4,825/day in revenue. Paradigm has invested. Multiple projects use "Futarchy as a Service," including Drift, Jito, and Sanctum.
US AI Safety Institute (now CAISI) — The US AI Safety Institute (US AISI), renamed the Center for AI Standards and Innovation (CAISI) in June 2025, is a government agency within the National Institute of Standards and Technology (NIST), established in 2023 to develop standards, evaluations, and guidelines for safe and trustworthy artificial intelligence.

People (2)

Archon Fung — Professor at the Harvard Kennedy School of Government. Expert on democracy, governance, and civic participation. Studies how institutional design affects public engagement and accountability.
Yoshua Bengio — Pioneer of deep learning who shared the 2018 Turing Award with Geoffrey Hinton and Yann LeCun for their foundational work on neural networks. As Scientific Director of Mila, the Quebec AI Institute, he leads one of the world's largest academic AI research centers. His technical contributions include fundamental work on neural network optimization, recurrent networks, and attention mechanisms. In recent years, Bengio has increasingly focused on AI safety and governance. He was an early signatory of the 2023 Statement on AI Risk and has become a prominent voice arguing that frontier AI development requires more caution and oversight. His concerns span both near-term harms (misinformation, job displacement) and longer-term risks from systems that might become difficult to control. Unlike some AI researchers who dismiss existential risk concerns, Bengio has engaged seriously with these arguments. His research agenda has evolved to include safety-relevant directions such as causal representation learning, which could help AI systems develop more robust and generalizable understanding of the world. He has advocated for international governance mechanisms for AI, including proposals for compute governance and safety standards. His position as one of the founding figures of modern AI gives his safety advocacy significant weight with policymakers and the broader research community.

Related Projects (2)

UMA Optimistic Oracle — Dispute-resolution mechanism for prediction markets. Assertions are assumed true unless challenged within a liveness window. Now uses LLMs to propose and dispute data. >99% accuracy. Primary resolution layer for Polymarket.
Democracy Levels Framework — Framework by Aviv Ovadya and 13 co-authors for evaluating the degree to which decisions in a given domain are made democratically. Provides milestones toward meaningfully democratic AI. Published November 2024 (arXiv:2411.09222).
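The optimistic-oracle lifecycle described above ("assumed true unless challenged") can be sketched as a small state machine. This is an illustrative model, not UMA's actual contract interface: the `Assertion` class, its fields, and the two-hour liveness default are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Assertion:
    """A claim posted to an optimistic oracle (illustrative model)."""
    claim: str
    asserted_at: float       # timestamp when the assertion was posted
    liveness: float = 7200.0  # challenge window in seconds (hypothetical default)
    disputed: bool = False

    def dispute(self) -> None:
        # A challenger escalates the claim to the dispute-resolution layer.
        self.disputed = True

    def settle(self, now: float) -> str:
        if self.disputed:
            return "escalated"   # resolved by a vote, not optimistically
        if now - self.asserted_at >= self.liveness:
            return "accepted"    # unchallenged through the window -> assumed true
        return "pending"         # still inside the challenge window


a = Assertion(claim="ETH closed above $2000 on 2024-01-01", asserted_at=0.0)
print(a.settle(now=3600.0))  # pending
print(a.settle(now=7200.0))  # accepted
```

The economic point of the design is that the happy path never touches the expensive resolution layer: honest assertions settle by timeout, and disputes are rare enough that bonding both sides keeps challenges credible.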

Related Wiki Pages

Top Related Pages

Policy

Voluntary AI Safety Commitments

Organizations

US AI Safety Institute

Risks

AI Authoritarian Tools
AI-Driven Institutional Decision Capture
Compute Concentration
AI Development Racing Dynamics

Concepts

AGI Timeline
Governance-Focused Worldview

Approaches

AI-Augmented Forecasting
AI Evaluation
Third-Party Model Auditing

Analysis

OpenAI Foundation Governance Paradox
US Government Authority Over Commercial AI Infrastructure
UMA Optimistic Oracle
Democracy Levels Framework

Historical

International AI Safety Summit Series

Key Debates

Open vs Closed Source AI
Government Regulation vs Industry Self-Governance

Other

Archon Fung
Yoshua Bengio

Clusters

governance

Tags

futarchy, prediction-markets, governance
