Longterm Wiki

Legislation

AI-related legislation, policies, and regulatory frameworks. Includes national and international laws, executive orders, and proposed bills.

Policies: 35 · With status: 23 · Enacted / in effect: 17 · Vetoed: 1
Showing 35 of 35 policies
AI Model Specifications

Model specifications are explicit documents defining AI behavior, now published by all major frontier labs (Anthropic, OpenAI, Google, Meta) as of 2025. While they improve transparency and enable external scrutiny, they face a fundamental spec-reality gap: specifications don't guarantee implementation.

AI Safety Institutes (AISIs)

Government-run institutions dedicated to evaluating frontier AI systems for dangerous capabilities and safety properties. Pioneered by the UK AISI in 2023, with analogues in the US (USAISI), EU, Japan, and others. Play a key role in pre-deployment evaluations and responsible scaling policy thresholds.

UK (2023), US (2024), others planned
AI Standards Development

International and national organizations that develop technical standards for AI systems, including measurement methodologies, safety requirements, and evaluation protocols. Key bodies include ISO/IEC JTC 1/SC 42, NIST, and IEEE, whose standards increasingly inform government AI regulation.

Emerging
AI Whistleblower Protections

Legal and institutional frameworks for protecting AI researchers and employees who report safety concerns. The bipartisan AI Whistleblower Protection Act (S.1792) introduced May 2025 addresses critical gaps in current law, while EU AI Act Article 87 provides protections from August 2026.

Artificial Intelligence and Data Act (AIDA)

The Artificial Intelligence and Data Act (AIDA) was Canada's proposed federal AI legislation, introduced as Part 3 of Bill C-27 (the Digital Charter Implementation Act, 2022). Despite years of debate and amendment, the bill died on the order paper when Parliament was dissolved in January 2025.

Failed · National · Canadian federal government · June 16, 2022
Bletchley Declaration

World-first international agreement on AI safety signed by 28 countries at the November 2023 AI Safety Summit, committing to cooperation on frontier AI risks. Follow-up summits in Seoul (May 2024) and Paris (February 2025) expanded commitments.

In effect · International · UK government (Rishi Sunak) · November 1-2, 2023
California SB 53

California's Transparency in Frontier Artificial Intelligence Act, the first U.S. state law regulating frontier AI models through transparency requirements, safety reporting, and whistleblower protections. Signed September 29, 2025 after Governor Newsom vetoed the more ambitious SB 1047 a year earlier.

Enacted · State · Senator Scott Wiener · 2025
China AI Regulatory Framework

China has developed one of the world's most comprehensive AI regulatory frameworks through a series of targeted regulations addressing specific AI applications and risks. Unlike the EU's comprehensive AI Act, China's approach is iterative and sector-specific, with new rules issued as technologies emerge.

In effect · National · Chinese government (CAC, State Council, MIIT) · 2021
Colorado Artificial Intelligence Act

The Colorado AI Act (SB 24-205) is the first comprehensive AI regulation enacted by a US state. Signed into law on May 17, 2024, it takes effect June 30, 2026. Establishes a risk-based framework regulating AI systems making consequential decisions, with developer documentation requirements and deployer impact assessments.

Enacted · State · Senator Robert Rodriguez · May 2024
Compute Governance

Compute governance uses computational hardware as a lever to regulate AI development. Because advanced AI requires enormous amounts of computing power, and that compute comes from concentrated supply chains, controlling compute provides a tractable way to govern AI before models are built.

Emerging · International · 2022
Compute Monitoring

Framework analyzing compute monitoring approaches for AI governance, finding that cloud KYC targeting 10^26 FLOP threshold is implementable now via three major providers controlling 60%+ of cloud infrastructure, while hardware-level governance faces 3-5 year development timelines.

Compute Thresholds

Analysis of compute thresholds as regulatory triggers, examining current implementations (EU AI Act at 10^25 FLOP, US EO at 10^26 FLOP), their effectiveness as capability proxies, and core challenges including algorithmic efficiency improvements that may render static thresholds obsolete within 3-5 years.
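
The threshold mechanics above can be sketched concretely. This is a hedged illustration, not part of either regulation: the 6 × parameters × tokens FLOP estimate is a standard rule of thumb for dense transformer training, and the example model sizes are hypothetical:

```python
# Regulatory compute thresholds cited above (in FLOP of training compute).
EU_AI_ACT_THRESHOLD = 1e25   # presumption of systemic risk under the EU AI Act
US_EO_THRESHOLD = 1e26       # reporting trigger under the 2023 US Executive Order

def training_flop(params: float, tokens: float) -> float:
    """Approximate training compute via the common 6 * N * D rule of thumb."""
    return 6 * params * tokens

def thresholds_crossed(flop: float) -> list[str]:
    """Return which regulatory thresholds a training run meets or exceeds."""
    crossed = []
    if flop >= EU_AI_ACT_THRESHOLD:
        crossed.append("EU AI Act (1e25 FLOP)")
    if flop >= US_EO_THRESHOLD:
        crossed.append("US EO (1e26 FLOP)")
    return crossed

# A hypothetical 70B-parameter model trained on 15T tokens lands around
# 6.3e24 FLOP, under both thresholds; a hypothetical 1T-parameter model
# on 20T tokens (~1.2e26 FLOP) would cross both.
print(thresholds_crossed(training_flop(70e9, 15e12)))
print(thresholds_crossed(training_flop(1e12, 20e12)))
```

The obsolescence concern in the text falls out of this arithmetic: as algorithmic efficiency improves, equivalent capability is reached at lower FLOP counts, so a static threshold captures progressively fewer relevant models.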

Council of Europe Framework Convention on Artificial Intelligence

The Council of Europe's AI Framework Convention represents the first legally binding international AI treaty, establishing human rights-focused governance principles across 57+ countries, though it has significant enforcement gaps and excludes national security applications.

In effect · International · 2024
EU AI Act

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. Adopted in 2024, it establishes a risk-based approach to AI regulation, with stricter requirements for higher-risk AI systems. Fines up to 7% of global revenue for violations.

In effect · International · European Commission · April 2021
Evals-Based Deployment Gates

Evals-based deployment gates require AI models to pass safety evaluations before deployment or capability scaling. The EU AI Act mandates conformity assessments for high-risk systems with fines up to EUR 35M or 7% global turnover, while UK AISI has evaluated 30+ frontier models.

In effect · International · 2023
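
The EU AI Act's penalty ceiling described above (the greater of EUR 35M or 7% of global annual turnover) can be sketched as a simple calculation. This is an illustrative model only; the figures in the usage lines are hypothetical, and actual fines are set case by case by regulators:

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound on an EU AI Act fine for the most serious violations:
    whichever is higher, a flat EUR 35M or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# Hypothetical large firm with EUR 100B turnover: the 7% term dominates.
print(max_fine_eur(100e9))

# Hypothetical small firm with EUR 200M turnover: the EUR 35M floor applies.
print(max_fine_eur(200e6))
```

The "whichever is higher" structure means the cap scales with firm size but never drops below the flat floor, so small developers are not effectively exempt.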
Failed and Stalled AI Proposals

Understanding why AI governance proposals fail is as important as understanding successes. Failed efforts reveal political constraints, industry opposition patterns, and the challenges of regulating rapidly evolving technology.

Federal
Hardware-Enabled Governance

Technical mechanisms built into AI chips enabling monitoring, access control, and enforcement of AI governance policies. RAND analysis identifies attestation-based licensing as most feasible with 5-10 year timeline, while an estimated 100,000+ export-controlled GPUs were smuggled to China in 2024.

International · 2024
International AI Safety Summit Series

The International AI Safety Summit series represents the first sustained effort at global coordination on AI safety, bringing together governments, AI companies, civil society, and researchers to address the risks from advanced AI.

In effect · International · November 2023
International Compute Regimes

Multilateral coordination mechanisms for AI compute governance, exploring pathways from non-binding declarations to comprehensive treaties. Assessment finds 10-25% chance of meaningful regimes by 2035, but potential for 30-60% reduction in racing dynamics if achieved.

International
International Coordination Mechanisms

International coordination on AI safety involves multilateral treaties, bilateral dialogues, and institutional networks to manage AI risks globally. Current efforts include the Council of Europe AI Treaty (17 signatories), the International Network of AI Safety Institutes (11+ members), and the Paris Summit 2025 with 61 signatories.

MAIM (Mutually Assured AI Malfunction)

A deterrence framework proposed by Dan Hendrycks, Eric Schmidt, and Alexandr Wang in their 2025 paper 'Superintelligence Strategy'. MAIM posits that rival states will naturally deter each other from pursuing unilateral AI dominance because destabilizing AI projects can be sabotaged through an escalation ladder from espionage to kinetic strikes. Part of a three-pillar framework including nonproliferation and competitiveness.

Model Registries

Centralized databases of frontier AI models that enable governments to track development, enforce safety requirements, and coordinate international oversight, serving as foundational infrastructure for AI governance analogous to the FDA's drug registries.

In effect · International · 2023
New York RAISE Act

State legislation requiring safety protocols, incident reporting, and transparency from developers of frontier AI models. Signed December 2025, effective January 2027, with civil penalties up to $3M enforced by the NY Attorney General.

Enacted · State · Senator Andrew Gounardes, Assemblymember Alex Bores · Early 2025
NIST AI Risk Management Framework (AI RMF)

The NIST AI Risk Management Framework (AI RMF) is a voluntary guidance document developed by the National Institute of Standards and Technology to help organizations manage risks associated with AI systems.

Federal · NIST · January 2023
Pause / Moratorium

Proposals to pause or slow frontier AI development until safety is better understood, offering potentially high safety benefits if implemented but facing significant coordination challenges and currently lacking adoption by major AI laboratories.

Proposed · International
Responsible Scaling Policies

Responsible Scaling Policies (RSPs) are voluntary commitments by AI labs to pause scaling when capability or safety thresholds are crossed. As of December 2025, 20 companies have published policies, though SaferAI grades the three major frameworks 1.9-2.2/5 for specificity.

In effect · International · September 2023
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was California state legislation that would have required safety testing and liability measures for developers of the most powerful AI models. Passed the legislature with strong majorities (Senate 32-1, Assembly 45-11) but was vetoed by Governor Newsom, who cited concerns about regulating based on model size rather than risk.

Vetoed · State · Senator Scott Wiener · February 2024
Seoul Declaration on AI Safety

The Seoul AI Safety Summit (May 21-22, 2024) was the second in a series of international AI safety summits, following the Bletchley Park Summit in November 2023.

In effect · International · May 22, 2024
Singapore Consensus on AI Safety Research Priorities

Consensus document from the 2025 Singapore Conference on AI (SCAI), authored by 88 researchers from 11 countries, organizing AI safety research into a defence-in-depth framework across three areas: Assessment, Development, and Control.

In effect · International · 2025
Texas Responsible AI Governance Act (TRAIGA)

Texas TRAIGA is a state-level AI regulation focused on intent-based liability for harmful AI practices rather than comprehensive safety requirements. Passed with near-unanimous support (House 146-3, Senate unanimous). Creates enforcement mechanisms and a regulatory sandbox but avoids prescriptive technical safety standards.

Enacted · State · Representative Giovanni Capriglione · December 2024
US AI Chip Export Controls

The United States has implemented unprecedented export controls on advanced semiconductors and semiconductor manufacturing equipment, primarily targeting China. These controls represent one of the most significant attempts to constrain AI development through hardware governance.

In effect · Federal · Bureau of Industry and Security (BIS) · October 2022
US Executive Order on Safe, Secure, and Trustworthy AI

The Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, signed by President Biden on October 30, 2023, is the most comprehensive US government action on AI to date. It establishes safety requirements for frontier AI systems, mandates government agency actions, and creates oversight mechanisms.

Revoked · Federal · President Biden · October 30, 2023
US Government Authority Over Commercial AI Infrastructure

The US government possesses extensive legal authority to direct, commandeer, or access the $700B+ in commercial AI infrastructure owned by US companies, through the Defense Production Act (priority contracts), CLOUD Act (global data access), FISA 702 (warrantless surveillance of non-US persons), IEEPA (emergency commerce regulation), and executive orders. Historical precedents include WWII industrial mobilization, post-9/11 PRISM surveillance, and COVID-era DPA invocations. Current military-commercial integration (Pentagon contracts with Anthropic, Google, OpenAI, xAI; GenAI.mil serving 3M personnel) creates pre-existing channels for rapid mobilization.

US State AI Legislation Landscape

In the absence of comprehensive federal AI legislation, US states have become laboratories for AI governance. As of 2024, hundreds of AI-related bills have been introduced across all 50 states, with several significant laws enacted.

In effect · State · 2019
Voluntary AI Safety Commitments

In July 2023, the White House secured voluntary commitments from leading AI companies on safety, security, and trust. These commitments represent the first coordinated industry-wide AI safety pledges, establishing baseline practices for frontier AI development.

In effect · International · White House · July 2023