AI Governance & Policy (Overview)
Overview
AI governance encompasses the policies, regulations, standards, and coordination mechanisms aimed at managing risks from advanced AI systems. The governance landscape is rapidly evolving, with approaches ranging from national legislation to international treaties to voluntary industry commitments. As of early 2026, no single governance framework has achieved comprehensive coverage of frontier AI risks, but multiple overlapping efforts are creating an increasingly dense regulatory environment.
Legislation and Regulation
Major regulatory frameworks and legislation across jurisdictions:
International:
EU AI Act: The world's first comprehensive AI regulation, adopting a risk-based approach that covers foundation models and general-purpose AI
Council of Europe Framework Convention on AI: First legally binding international AI treaty, establishing human rights standards
United States:
California SB 1047: Pioneering state-level frontier AI safety bill (vetoed but influential)
California SB 53: First US state law regulating frontier AI models through transparency requirements
US Executive Order on AI: Federal executive action on AI safety and security
NIST AI Risk Management Framework: Voluntary framework for managing AI risks
US State AI Legislation: Growing landscape of state-level AI regulation
New York RAISE Act: State legislation requiring safety protocols for frontier AI
Texas TRAIGA: Comprehensive AI governance act signed in 2025
Colorado AI Act: State-level AI regulation focused on high-risk systems
Other jurisdictions:
Canada AIDA: Canada's Artificial Intelligence and Data Act
China AI Regulations: China's evolving approach to AI governance including generative AI rules
Analysis:
Failed and Stalled AI Policy Proposals: Tracking proposals that did not advance and why
Compute Governance
Technical governance approaches leveraging the physical infrastructure of AI:
AI Chip Export Controls: US policies restricting advanced AI chip exports, particularly to China
Compute Thresholds: Using training compute as a measurable threshold for regulatory triggers
Compute Monitoring: Approaches to tracking and verifying AI training runs
Hardware-Enabled Governance: Technical mechanisms in AI hardware for monitoring and enforcement
International Compute Regimes: Proposals for international coordination on compute governance
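What makes compute "measurable" as a regulatory trigger can be sketched concretely. Training compute for dense transformers is commonly approximated as 6 × parameters × training tokens, which is how a run can be checked against thresholds such as the EU's 10^25 FLOP or the US Executive Order's 10^26 FLOP. The model size and token count below are hypothetical; only the thresholds come from the frameworks discussed on this page.

```python
# Illustrative sketch only (not text from any statute): estimating whether a
# training run crosses a compute threshold, using the common approximation
# that dense-transformer training FLOP ~= 6 * parameters * tokens.

EU_THRESHOLD_FLOP = 1e25   # EU AI Act systemic-risk trigger for GPAI models
US_THRESHOLD_FLOP = 1e26   # threshold in the (since-revoked) US Executive Order

def training_flop(parameters: float, tokens: float) -> float:
    """Rough FLOP estimate for dense transformer training (6ND rule of thumb)."""
    return 6.0 * parameters * tokens

def crosses(flop: float, threshold: float) -> bool:
    return flop >= threshold

# Hypothetical example: a 500B-parameter model trained on 15T tokens.
flop = training_flop(500e9, 15e12)          # ~4.5e25 FLOP
print(crosses(flop, EU_THRESHOLD_FLOP))     # True
print(crosses(flop, US_THRESHOLD_FLOP))     # False
```

The asymmetry in the example is the point regulators rely on: a single scalar estimate, computable from disclosed training parameters, sorts the same run differently under the EU and US thresholds.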
International Coordination
Mechanisms for cross-border cooperation on AI safety:
International AI Safety Summits: Series of international summits on AI safety starting with Bletchley Park (2023)
Bletchley Declaration: First international agreement on AI safety signed by 28 countries
Seoul Declaration: Follow-up international commitment on frontier AI safety
International Coordination Mechanisms: Bilateral dialogues, multilateral treaties, and institutional networks
Industry Self-Regulation
Voluntary commitments and industry-led safety frameworks:
Responsible Scaling Policies: Framework pioneered by Anthropic tying safety requirements to capability levels
Voluntary Industry Commitments: Commitments secured by the Biden administration from major AI labs
Model Registries: Centralized databases for tracking frontier AI models
Governance Assessment
AI Governance and Policy: Broader analysis of governance approaches and their effectiveness
Policy Effectiveness Assessment: Evaluating which governance interventions actually reduce risk
Key Tensions
Speed vs. thoroughness: The pace of AI capability development outstrips the pace of legislative and regulatory processes in most jurisdictions.
National vs. international: AI development is global but governance is primarily national, creating coordination challenges and regulatory arbitrage risks.
Voluntary vs. mandatory: Industry self-regulation (RSPs, voluntary commitments) is faster to implement but lacks enforcement mechanisms. Legislation provides enforcement but is slower and harder to update.
Compute governance as bottleneck: Compute is the most governable input to AI development (physical, concentrated, measurable), but effective compute governance requires international coordination that remains elusive.