Longterm Wiki
Updated 2026-03-13
Summary

A structured index/overview of AI governance approaches across jurisdictions, compute governance, international coordination, and industry self-regulation as of early 2026, identifying key tensions (speed vs. thoroughness, national vs. international, voluntary vs. mandatory). Functions primarily as a navigation hub with minimal original analysis or sourced claims.


AI Governance & Policy (Overview)

Overview

AI governance encompasses the policies, regulations, standards, and coordination mechanisms aimed at managing risks from advanced AI systems. The governance landscape is rapidly evolving, with approaches ranging from national legislation to international treaties to voluntary industry commitments. As of early 2026, no single governance framework has achieved comprehensive coverage of frontier AI risks, but multiple overlapping efforts are creating an increasingly dense regulatory environment.

Legislation and Regulation

Major regulatory frameworks and legislation across jurisdictions:

International:

  • EU AI Act: The world's first comprehensive AI regulation, adopting a risk-based approach to regulate foundation models and general-purpose AI
  • Council of Europe Framework Convention on AI: First legally binding international AI treaty, establishing human rights standards

United States:

  • California SB 1047: Pioneering state-level frontier AI safety bill (vetoed but influential)
  • California SB 53: First US state law regulating frontier AI models through transparency requirements
  • US Executive Order on AI: Federal executive action on AI safety and security
  • NIST AI Risk Management Framework: Voluntary framework for managing AI risks
  • US State AI Legislation: Growing landscape of state-level AI regulation
  • New York RAISE Act: State legislation requiring safety protocols for frontier AI
  • Texas TRAIGA: Comprehensive AI governance act signed in 2025

Other jurisdictions:

  • Canada AIDA: Canada's Artificial Intelligence and Data Act
  • Colorado AI Act: State-level AI regulation focused on high-risk systems
  • China AI Regulations: China's evolving approach to AI governance including generative AI rules

Analysis:

  • Failed and Stalled AI Policy Proposals: Tracking proposals that did not advance and why

Compute Governance

Technical governance approaches leveraging the physical infrastructure of AI:

  • AI Chip Export Controls: US policies restricting advanced AI chip exports, particularly to China
  • Compute Thresholds: Using training compute as a measurable threshold for regulatory triggers
  • Compute Monitoring: Approaches to tracking and verifying AI training runs
  • Hardware-Enabled Governance: Technical mechanisms in AI hardware for monitoring and enforcement
  • International Compute Regimes: Proposals for international coordination on compute governance
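The compute-threshold idea above is unusually concrete for a governance mechanism: training compute can be estimated from model size and training data, then compared against a numeric trigger. The sketch below illustrates this using the widely cited FLOP ≈ 6 × parameters × tokens approximation and the threshold figures commonly attributed to the EU AI Act (10^25 FLOP) and the US Executive Order (10^26 FLOP). The threshold values and the hypothetical model figures are illustrative assumptions, not a compliance tool.

```python
# Illustrative sketch of compute-threshold checking. Threshold values
# are as commonly cited in public discussion and may change; the model
# figures are hypothetical.

THRESHOLDS_FLOP = {
    "EU AI Act (GPAI systemic-risk presumption)": 1e25,
    "US Executive Order (reporting requirement)": 1e26,
}

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Rough training compute: ~6 FLOP per parameter per training token."""
    return 6.0 * n_params * n_tokens

def thresholds_crossed(flop: float) -> list[str]:
    """Names of regulatory thresholds met or exceeded by this training run."""
    return [name for name, limit in THRESHOLDS_FLOP.items() if flop >= limit]

# Example: a hypothetical 70B-parameter model trained on 15T tokens
# comes in around 6.3e24 FLOP, under both thresholds above.
flop = estimated_training_flop(70e9, 15e12)
print(f"{flop:.2e}", thresholds_crossed(flop))
```

Note how sensitive the trigger is to scale: roughly doubling either parameters or tokens in this example would cross the lower threshold, which is one reason threshold levels themselves are contested.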

International Coordination

Mechanisms for cross-border cooperation on AI safety:

  • International AI Safety Summits: Series of international summits on AI safety starting with Bletchley Park (2023)
  • Bletchley Declaration: First international agreement on AI safety signed by 28 countries
  • Seoul Declaration: Follow-up international commitment on frontier AI safety
  • International Coordination Mechanisms: Bilateral dialogues, multilateral treaties, and institutional networks

Industry Self-Regulation

Voluntary commitments and industry-led safety frameworks:

  • Responsible Scaling Policies: Framework pioneered by Anthropic tying safety requirements to capability levels
  • Voluntary Industry Commitments: Commitments secured by the Biden administration from major AI labs
  • Model Registries: Centralized databases for tracking frontier AI models

Governance Assessment

  • AI Governance and Policy: Broader analysis of governance approaches and their effectiveness
  • Policy Effectiveness Assessment: Evaluating which governance interventions actually reduce risk

Key Tensions

Speed vs. thoroughness: The pace of AI capability development outstrips the pace of legislative and regulatory processes in most jurisdictions.

National vs. international: AI development is global but governance is primarily national, creating coordination challenges and regulatory arbitrage risks.

Voluntary vs. mandatory: Industry self-regulation (RSPs, voluntary commitments) is faster to implement but lacks enforcement mechanisms. Legislation provides enforcement but is slower and harder to update.

Compute governance as bottleneck: Compute is the most governable input to AI development (physical, concentrated, measurable), but effective compute governance requires international coordination that remains elusive.

Related Pages

Policy

  • Compute Monitoring
  • US State AI Legislation Landscape
  • International Compute Regimes
  • Responsible Scaling Policies (RSPs)
  • International AI Safety Summit Series
  • Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

Analysis

AI Policy Effectiveness