A structured index/overview of AI governance approaches across jurisdictions, compute governance, international coordination, and industry self-reg...
This is a stub overview page that lists four policy/governance topic areas (RSPs, model specs, evaluation governance, pause/moratorium) with one-li...
Covers AI chip governance supply chain frameworks including U.S. export controls (EAR, FDPR), hardware-enabled governance proposals, key chokepoint...
This article argues that government capacity to implement AI policy is critically lagging behind AI development, creating an existential risk throu...
TRAIGA represents a state-level AI regulation focused on intent-based liability for harmful AI practices rather than comprehensive safety requireme...
RAND analysis identifies attestation-based licensing as most feasible hardware-enabled governance mechanism with 5-10 year timeline, while 100,000+...
The OpenAI Foundation holds Class N shares giving it exclusive power to appoint/remove all OpenAI Group PBC board members. However, 7 of 8 Foundati...
Comprehensive analysis of AI governance mechanisms estimating 30-50% probability of meaningful regulation by 2027 and 5-25% x-risk reduction potent...
This is a comprehensive overview of U.S. AI chip export controls policy, documenting the evolution from blanket restrictions to case-by-case licens...
Comprehensive analysis of coordination mechanisms for AI safety showing racing dynamics could compress safety timelines by 2-5 years, with $500M+ ...
Comprehensive comparison of government regulation versus industry self-governance for AI, documenting that US federal AI regulations doubled to 59 ...
This worldview argues governance/coordination is the bottleneck for AI safety (not just technical solutions), estimating 10-30% P(doom) by 2100. Ev...
GovAI is an AI policy research organization with ~40-45 staff, funded primarily by Coefficient Giving ($1.8M+ in 2023-2024), that has trained 100+...
Surveys US legal authority (DPA, IEEPA, CLOUD Act, FISA 702) over $700B+ in commercial AI infrastructure concentrated in 5-6 companies, concluding...
Overview of national AI Safety Institutes (UK, US, and 11+ countries as of 2026) and intergovernmental bodies, covering budgets, mandates, and key ...
Comprehensive overview of US government software and AI workforce capacity, covering the civic tech ecosystem (USDS, 18F, Code for America), DOGE-e...
Pahlka's 2023 book argues government digital failures stem from institutional culture separating policy from implementation, creating a 'cascade of...
The Future Society is an international nonprofit founded in 2014 at Harvard Kennedy School that works on AI governance across the UN, OECD, EU, and...
Analyzes model registries as foundational governance infrastructure across US (≥10^26 FLOP threshold), EU (≥10^25 FLOP), and state-level implementa...
Analysis of government AI Safety Institutes finding they've achieved rapid institutional growth (UK: 0→100+ staff in 18 months) and secured pre-dep...
ARIA is a UK government R&D agency whose Safeguarded AI Programme (£59M, led by davidad with Yoshua Bengio as Scientific Director) represents the l...
This article synthesizes the relationship between political stability and AI safety across military, governance, and public trust dimensions, ident...
Comprehensive analysis of AI governance policy effectiveness finds compute thresholds and export controls achieve 60-75% compliance while voluntary...
Carnegie's AI program researches how AI reshapes global governance, geopolitics, and democratic institutions. Operating through offices in Washingt...
The Brookings AIET Initiative is one of the most-cited think tank programs on AI policy in Washington. Part of the Governance Studies program, it p...
CDT is one of the oldest and most established digital rights organizations engaging on AI policy, founded in 1994. Its AI Governance Lab focuses on...
FlexHEG is a nascent but technically serious proposal to embed tamper-resistant governance processors into AI accelerators, enabling cryptographica...
The Centre for Long-Term Resilience is a UK-based think tank that has demonstrated concrete policy influence on AI and biosecurity risks, including...
Comprehensive analysis of international AI compute governance finds 10-25% chance of meaningful regimes by 2035, but potential for 30-60% reduction...
Curated editorial overview of 14 near-term AI risks organized by urgency across governance, misuse, epistemic, and technical domains. Includes a qu...
Economic model analyzing AI safety research returns, recommending 3-10x funding increases from current ~$500M/year to $2-5B, with highest margina...
Identifies 35 high-leverage uncertainties in AI risk across compute (scaling breakdown at 10^26-10^30 FLOP), governance (10% P(US-China treaty by 2...
A comprehensive technical taxonomy of hardware-based AI verification mechanisms—location attestation, TEEs, compute metering, interconnect limits, ...
This page synthesizes post-FTX critiques of EA's epistemic and governance failures, identifying interlocking problems including donor hero-worship,...
Strategic framework analyzing how non-lab actors could respond to frontier AI labs deploying $100-300B+ pre-TAI. For philanthropies: analysis of p...
This is a high-level overview/index page for an 'AI Uses' factor in a transition model, listing key dimensions (recursive AI, critical infrastructu...
Astralis Foundation is a Swedish philanthropic organization focused on AI safety and governance.
Comprehensive biographical entry on Robin Hanson covering his contributions to prediction markets, futarchy governance, and skeptical AI safety pos...
International summits convening governments and AI labs to address AI safety.
A competent reference entry on Carnegie Endowment for International Peace covering its AI governance work, relationship to the AI safety community,...
BIS is a high-relevance regulatory actor for AI governance due to its semiconductor export controls and emerging AI diffusion rules; this article p...
This wiki page synthesizes elite theory, political economy, and AI governance into an analytical framework for understanding coordination failures ...
GPAI represents the first major multilateral AI governance initiative but operates as a non-binding policy laboratory with limited enforcement powe...
A well-structured retrospective on the FTX collapse identifying six major pre-collapse warning signs (FTT overreliance, fund commingling, governanc...
Root factor measuring humanity's collective ability to navigate AI transition through governance, epistemics, and adaptability.
Comprehensive analysis of how AI systems could capture institutional decision-making across healthcare, criminal justice, hiring, and governance th...
Rethink Priorities is a research organization founded in 2018 that grew from 2 to ~130 people by 2022, conducting evidence-based analysis across an...
The Frontier Model Forum represents the AI industry's primary self-governance initiative for frontier AI safety, establishing frameworks and fundin...
Comprehensive biographical profile of Helen Toner documenting her career from EA Melbourne founder to CSET Interim Executive Director, with detaile...
A competent overview of the Ford Foundation's history and positioning relative to AI governance, but the foundation's actual AI safety relevance is...