AI Governance & Policy (Overview)
Overview
AI governance encompasses the policies, regulations, standards, and coordination mechanisms aimed at managing risks from advanced AI systems. The governance landscape is evolving across multiple dimensions simultaneously: national governments are enacting legislation at varying speeds, international bodies are attempting coordination that has so far produced voluntary rather than binding agreements, and industry is developing internal safety frameworks without external verification requirements.
As of mid-2026, no single governance framework covers all frontier AI risks comprehensively. Multiple overlapping mechanisms—mandatory legislation in some jurisdictions, voluntary industry commitments in others, international declarations with limited enforcement authority—are operating in parallel. The degree to which this proliferation of frameworks produces meaningful risk reduction is subject to active debate among researchers and policymakers, with critics pointing to enforcement gaps, pace mismatches, and verification failures, and proponents arguing that the frameworks establish norms that constrain industry behavior even absent direct enforcement.
Quick Assessment
The following table summarizes major governance mechanisms by type, enforcer, and current status. Qualitative labels are avoided in favor of measurable evidence where available.
| Mechanism | Type | Primary Enforcer | Scope / Status (mid-2026) |
|---|---|---|---|
| EU AI Act (GPAI phase) | Mandatory | EU AI Office (125+ staff)[1] | GPAI obligations active Aug 2025; fines begin Aug 2026[2] |
| EU AI Act (high-risk systems) | Mandatory | 27 national market surveillance authorities | Requirements fully applicable Aug 2026[2] |
| NIST AI Risk Management Framework | Voluntary | None | Referenced by CFPB, FDA, SEC, FTC, EEOC; no public adoption count[3] |
| US Executive Order on AI (Biden) | Executive | Agency-by-agency | Revoked Jan 20, 2025; replacement policies not finalized as of mid-2026[4] |
| Texas TRAIGA | Mandatory | Texas Attorney General | $10K–$200K per violation; effective Jan 1, 2026[5] |
| Colorado AI Act | Mandatory | Colorado Attorney General | 60-day cure period before enforcement; effective June 30, 2026[6] |
| California SB 53 | Mandatory | California Attorney General | Frontier model transparency; signed Sept 2025[7] |
| Seoul Frontier AI Safety Commitments | Voluntary | None (reputational) | 16 companies signed; no external verification mechanism[8] |
| Biden White House Voluntary Commitments | Voluntary | None | 15 companies signed (July–Sept 2023); politically superseded after EO revocation[9] |
| Council of Europe AI Convention | Treaty | National courts | 37 signatories; US signed but not ratified; not in force as of 2026[10] |
| US AI Chip Export Controls | Mandatory | Bureau of Industry and Security | Active enforcement with documented circumvention[11] |
| China AI Regulations (Generative AI Measures) | Mandatory | Cyberspace Administration of China (CAC) | Thousands of algorithm filings approved; new labeling rules Sept 2025[12] |
| Canada AIDA | — | — | Bill died when Parliament prorogued Jan 6, 2025[13] |
For a systematic evaluation of governance effectiveness with evidence ratings across mechanisms, see the AI Governance Effectiveness Analysis page. For analysis of actors, funding flows, and decision-making authority shaping governance outcomes, see the AI Power and Influence Map page.
How It Works
Regulatory Trigger Mechanisms
AI governance frameworks use different criteria to identify which systems come under their requirements:
Risk-tier classification (EU AI Act model): Systems are classified by use case into risk tiers. The EU AI Act designates uses such as hiring, credit scoring, and critical infrastructure as "high-risk," triggering conformity assessments, technical documentation, and human oversight requirements. General-purpose AI (GPAI) models above 10²⁵ training FLOPs face transparency obligations and, for models deemed to pose systemic risk, additional safety evaluations.[2]
Compute thresholds: Training compute (measured in floating-point operations) serves as a measurable proxy for model capability. The Biden Executive Order used a similar threshold structure before its revocation.[4] Critics of threshold-based approaches note that algorithmic efficiency improvements can deliver comparable capabilities at lower compute, letting high-capability models fall below regulatory thresholds.
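As a concrete illustration, the sketch below estimates a training run's compute with the common 6 × parameters × tokens heuristic for dense transformers and compares it to the 10²⁵ FLOP presumption threshold cited above. The heuristic and the example model are illustrative assumptions, not anything specified in statute.

```python
# Back-of-envelope compute-threshold check. The 6*N*D rule of thumb
# (about 6 FLOPs per parameter per training token for a dense
# transformer) is a community heuristic, not a regulatory formula;
# the example model below is hypothetical.

EU_GPAI_SYSTEMIC_RISK_FLOPS = 1e25  # EU AI Act presumption threshold

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """Would this training run cross the 10^25 FLOP presumption line?"""
    return estimate_training_flops(n_params, n_tokens) >= EU_GPAI_SYSTEMIC_RISK_FLOPS

# Hypothetical 400B-parameter model trained on 15T tokens:
flops = estimate_training_flops(400e9, 15e12)
print(f"{flops:.1e} FLOPs; systemic-risk presumption: {presumed_systemic_risk(400e9, 15e12)}")
# -> 3.6e+25 FLOPs; systemic-risk presumption: True
```

The same arithmetic makes the critics' point concrete: a 2× algorithmic-efficiency improvement halves the estimated FLOPs for a given capability, which can move a model below the threshold without changing what it can do.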
Capability-based triggers: Responsible Scaling Policies use demonstrated capabilities rather than compute alone—for example, whether a model can provide meaningful uplift to someone attempting to create biological, chemical, nuclear, or radiological weapons. Anthropic defines AI Safety Levels analogous to biosafety levels; models must remain below capability thresholds or additional safeguards must be implemented before further development.[14]
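The gating logic implied by a capability-based trigger can be sketched abstractly, as below. Domain names, scores, and thresholds are hypothetical placeholders; real RSP evaluations are far more involved and, as noted elsewhere on this page, are run by the labs themselves.

```python
# Illustrative capability-gate sketch: if any evaluated capability meets
# its trigger, continued development first requires the safeguards of the
# next safety level. All names and numbers here are hypothetical, not any
# lab's actual evaluation suite or thresholds.
from dataclasses import dataclass

@dataclass
class CapabilityEval:
    domain: str      # e.g. "cbrn_uplift"
    score: float     # measured capability on some evaluation
    trigger: float   # level at which stronger safeguards are required

def required_safety_level(evals: list[CapabilityEval], current_level: int) -> int:
    """Safety level that must be in place before further scaling."""
    return current_level + 1 if any(e.score >= e.trigger for e in evals) else current_level

evals = [
    CapabilityEval("cbrn_uplift", score=0.42, trigger=0.60),
    CapabilityEval("autonomous_replication", score=0.71, trigger=0.65),
]
print(required_safety_level(evals, current_level=2))  # -> 3 (one trigger met)
```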
Use-case and deployment rules: Some frameworks focus on how AI is deployed rather than on underlying model properties. Colorado's AI Act requires developers and deployers of high-risk AI—systems making consequential decisions about housing, employment, credit, or healthcare—to conduct impact assessments.[6] Texas TRAIGA similarly focuses on deployer disclosure obligations and consumer notification for AI used in consequential decisions.[5]
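Use-case triggers reduce, in effect, to checking the deployment context rather than the model, as in this minimal sketch. The domain set paraphrases the consequential-decision categories named in the Colorado statute; the function and its names are illustrative, not statutory text.

```python
# Minimal use-case trigger: obligations attach to where and how a system
# is deployed, not to model size or capability. The domain set paraphrases
# the Colorado AI Act's consequential-decision categories; this is an
# illustration, not statutory language.

CONSEQUENTIAL_DOMAINS = {"housing", "employment", "credit", "healthcare"}

def impact_assessment_required(domain: str, consequential_decision: bool) -> bool:
    """True when a deployer-side impact assessment would be triggered."""
    return consequential_decision and domain in CONSEQUENTIAL_DOMAINS

print(impact_assessment_required("employment", True))  # True
print(impact_assessment_required("gaming", True))      # False: domain not covered
```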
Enforcement Flow
For mandatory frameworks, enforcement typically follows a chain from legislation to designated agency to regulated entity:
- EU: The European Commission, acting through the AI Office, enforces GPAI rules. National market surveillance authorities in each EU member state handle enforcement for non-GPAI systems, meaning 27 national bodies of varying technical capacity bear responsibility for most of the EU AI Act's practical implementation.[2][1]
- US states: State attorneys general enforce state AI laws directly. Texas's AG has exclusive enforcement authority under TRAIGA (with exceptions for some licensing agencies).[5] Colorado's AG has similar authority under the Colorado AI Act.[6]
- China: The Cyberspace Administration of China (CAC) administers algorithm filing requirements, content standards, and labeling rules for AI services offered to Chinese users.[12]
For voluntary frameworks, accountability is primarily reputational: public disclosure of commitments, scrutiny from civil society and researchers, and participation in international evaluation processes such as AI Safety Institute assessments. No external enforcement mechanism compels compliance.
Legislation and Regulation
Major regulatory frameworks and legislation across jurisdictions:
International:
- EU AI Act: The first comprehensive AI regulation adopting a risk-based approach. The EU AI Office (125+ staff)[1] enforces GPAI rules; national authorities handle the remainder. For the phased rollout timeline (August 2025–2027 obligations) and details on the GPAI Code of Practice, see EU AI Act Phased Enforcement in the Emergent Trends section below.[2]
- Council of Europe Framework Convention on AI: The first legally binding international AI treaty, establishing human rights standards across AI system lifecycles. Signed by the US and 36 other signatories as of January 2025. The treaty is not yet in force, pending ratification by five states; the European Commission proposed formal EU ratification on June 3, 2025.[10]
United States:
- California SB 1047: The first US state bill targeting frontier AI model developers directly; vetoed by Governor Newsom on September 29, 2024. Newsom cited the bill's failure to differentiate between high-risk and low-risk deployment contexts and its potential to create a "false sense of security" by overlooking smaller models presenting similar risks.[15]
- California SB 53: Signed September 29, 2025; described by the Governor's office as the first enforceable US regulatory framework for the most advanced AI systems, developed with input from an advisory group including AI researchers at Stanford and UC Berkeley.[7]
- US Executive Order on AI: Biden's Executive Order 14110 (October 2023) directed agencies to develop AI risk management standards and used compute thresholds to define frontier models. Trump revoked EO 14110 on January 20, 2025, directing agencies to rescind actions inconsistent with new policy goals emphasizing reduced regulatory barriers. Agency-specific guidance already issued under the Biden EO remained in effect unless individually rescinded.[4]
- NIST AI Risk Management Framework: Voluntary framework released January 2023. Referenced in sector-specific AI expectations by the CFPB, FDA, SEC, FTC, and EEOC. A Generative AI Profile (NIST-AI-600-1) was released July 2024.[3] No public data is available on the count of organizations that have formally adopted the framework.
- US State AI Legislation: As of mid-2026, Colorado, Texas, and California have enacted AI governance laws with enforcement mechanisms.
- New York RAISE Act: State legislation requiring safety protocols for frontier AI systems; its enactment status as of mid-2026 remained unsettled.
- Texas TRAIGA: Signed June 22, 2025, effective January 1, 2026. Enforced by the Texas AG with civil penalties of $10,000–$200,000 per violation. The statute creates a Texas AI Advisory Council, which is expressly prohibited from issuing binding regulations.[5]
- Colorado AI Act: First passed May 2024; implementation delayed to June 30, 2026 by an August 2025 amendment. Requires developers and deployers of high-risk AI to conduct risk-management programs and impact assessments. The Colorado AG has exclusive enforcement authority with a 60-day cure period before action.[6]
Other jurisdictions:
- Canada AIDA: Canada's proposed Artificial Intelligence and Data Act (part of Bill C-27) died when Prime Minister Trudeau's resignation caused parliamentary prorogation on January 6, 2025, leaving Canada without comprehensive federal AI legislation.[13]
- China AI Regulations: China's Generative AI Measures took effect August 15, 2023, requiring algorithm registration with the CAC, content labeling, and user verification for public-facing generative AI. As of late 2025, the CAC had approved thousands of algorithm filings and registered hundreds of generative AI platforms. New mandatory labeling rules for AI-generated content took effect September 1, 2025.[12]
Analysis:
- Failed and Stalled AI Policy Proposals: Tracking proposals that did not advance and why.
Compute Governance
Technical governance approaches leveraging the physical infrastructure of AI:
- AI Chip Export Controls: US Bureau of Industry and Security restrictions on advanced AI chip exports, initially implemented October 2022 and expanded in 2023 and 2024. The controls face documented circumvention: a smuggling network involving at least $160 million in NVIDIA H100 and H200 GPUs operated between October 2024 and May 2025; the co-founder of Super Micro Computer was arrested in March 2026 for conspiring to divert chip-equipped servers to China via intermediaries using falsified end-user certificates; and at least eight Chinese AI chip-smuggling networks with transactions over $100 million each had been identified as of 2025. The Center for a New American Security estimated that between 10,000 and hundreds of thousands of AI chips were smuggled to China in 2024 alone.[11]
- Compute Thresholds: Using training compute as a measurable threshold for regulatory triggers. The EU AI Act uses 10²⁵ FLOPs as the threshold above which GPAI models are presumed to pose systemic risk. Critics note that algorithmic efficiency improvements may allow high-capability models to remain below such thresholds over time.
- Compute Monitoring: Approaches to tracking and verifying AI training runs, including proposals for cloud service provider reporting requirements and Know Your Customer requirements for large compute purchases; a minimal screening check is sketched after this list.
- Hardware-Enabled Governance: Technical mechanisms proposed for embedding monitoring or enforcement capabilities directly in AI hardware, such as location-verified compute usage and hardware-level model auditing.
- International Compute Regimes: Proposals for international coordination on compute governance, including analogues to the IAEA or CERN. No such regime has been established as of mid-2026.
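The screening logic behind the Know Your Customer proposals listed above can be illustrated with a simple check, sketched below. The chip-hour threshold and field names are assumptions for illustration; no such US federal reporting requirement was in force as of mid-2026.

```python
# Hedged sketch of a KYC-style screen for large compute purchases, in the
# spirit of compute-monitoring proposals. The threshold and field names
# are hypothetical; no such reporting rule is actually in force.

ANNUAL_CHIP_HOUR_THRESHOLD = 1_000_000  # hypothetical reporting trigger

def requires_kyc_report(annual_chip_hours: float, end_user_verified: bool) -> bool:
    """Flag purchases above the threshold whose end user is unverified."""
    if annual_chip_hours < ANNUAL_CHIP_HOUR_THRESHOLD:
        return False
    return not end_user_verified

print(requires_kyc_report(2_500_000, end_user_verified=False))  # True
print(requires_kyc_report(50_000, end_user_verified=False))     # False: below threshold
```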
International Coordination
Mechanisms for cross-border cooperation on AI safety:
- International AI Safety Summits: A series of international summits beginning with Bletchley Park (November 2023), continuing at Seoul (May 2024) and Paris (February 2025).
- Bletchley Declaration: First international agreement on AI safety, signed by 28 countries at the Bletchley Summit (November 2023). Established the principle of international AI safety cooperation without binding commitments.
- Seoul Declaration: Follow-up international commitment on frontier AI safety, adopted at the Seoul Summit (May 21–22, 2024). The summit secured "Frontier AI Safety Commitments" from 16 leading AI companies—the first time companies signed specific commitments to define risk thresholds and implement mitigations when capabilities could pose "severe risks."[8] An International Network of AI Safety Institutes was also announced at Seoul.[8]
- Paris AI Action Summit (February 2025): Held February 10–12, 2025, with over 1,000 participants from more than 100 countries. A "Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet" was signed by approximately 60 nations and supranational organizations including China, India, the EU, and the African Union. The US and UK were notable non-signatories to the main declaration.[16] French President Macron announced €109 billion in private-sector AI investment commitments; the EU announced €200 billion in AI-related investment.[16] The summit revealed divergent priorities among major AI-developing nations, with the US emphasizing reduced regulatory burden and the EU and Global South emphasizing inclusive and sustainable development.
- International Coordination Mechanisms: Bilateral dialogues, multilateral treaties, and institutional networks for ongoing coordination beyond summit declarations.
Industry Self-Regulation
Voluntary commitments and industry-led safety frameworks:
- Responsible Scaling Policies: Framework first published by Anthropic, subsequently adopted in broadly similar form by OpenAI and Google DeepMind. RSPs define capability thresholds at which additional safeguards must be implemented before continued development. Anthropic's RSP version 3.0, released in 2025, refined evaluation processes using "safety case" methodologies and acknowledged that the science of model evaluation was not yet well-developed enough for precisely defined capability thresholds.[14] Proponents argue RSPs represent the most technically sophisticated governance mechanism in operation. Critics note that labs conduct their own capability evaluations, that no RSP threshold has been publicly reported as triggered, and that no external body verifies compliance. Both OpenAI and Google DeepMind test models on their ability to assist with biological weapons development as part of their frameworks.[14]
- Voluntary Industry Commitments: Commitments secured by the Biden administration from an initial seven companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, OpenAI) in July 2023, with eight more companies joining in September 2023.[9] Commitments included pre-deployment security testing, information sharing on AI risks, research into content provenance systems, and investments in cybersecurity safeguards. No verification mechanism was established and no accountability procedure was specified.[9]
- Model Registries: Proposals for centralized databases tracking frontier AI models, their capabilities, and deployment status. No binding international model registry exists as of mid-2026; China's algorithm filing system with the CAC functions as a national-level analogue.[12]
2025–2026 Emergent Trends
EU AI Act Phased Enforcement
The EU AI Act is rolling out in phases rather than taking immediate full effect:
- August 2, 2025: GPAI model obligations entered application for new models placed on market after this date. The GPAI Code of Practice was endorsed by the Commission on August 1, 2025 as a voluntary compliance tool.[2]
- August 2, 2026: Commission enforcement powers (including fines) for GPAI providers become active; high-risk AI system requirements for healthcare, finance, employment, and critical infrastructure also become fully applicable.[2]
- August 2, 2027: Legacy GPAI models placed on market before August 2025 must comply.[2]
The EU AI Office, responsible for GPAI enforcement, employed more than 125 staff as of 2025, with plans to exceed 140 by end of 2025, organized into five units covering regulation, AI safety, innovation, and societal applications.[1]
US Federal Policy Reversal
The Trump administration revoked Biden's Executive Order 14110 on January 20, 2025, its first day in office, directing agencies to suspend, revise, or rescind prior AI policies inconsistent with the new administration's emphasis on reducing regulatory barriers to AI development. As of mid-2026, replacement federal AI policies remain incomplete.[4] In December 2025, the administration issued an executive order directing federal agencies to challenge state AI laws and establish a minimally burdensome national standard, creating potential conflict with state-level legislation enacted in the same period.[7]
State-Level Law Proliferation
Following the veto of California SB 1047 in September 2024, US AI governance activity shifted substantially toward deployment-focused state legislation:
- California SB 53 (signed September 2025): Targets frontier model developers with transparency and capability assessment requirements.[7]
- Texas TRAIGA (signed June 2025, effective January 2026): Focuses on deployer disclosure obligations and consumer notification requirements for AI used in consequential decisions.[5]
- Colorado AI Act (effective June 2026): Requires impact assessments for high-risk AI affecting individuals in consequential decisions; further amendments anticipated in the 2026 legislative session.[6]
- Canada AIDA died in January 2025 without passage.[13]
The movement from SB 1047's model-developer-focused approach to deployment-focused frameworks reflects a pattern in which legislators focused on observable deployments rather than underlying model capabilities—a choice with implications for which risks governance mechanisms can address.
Seoul-to-Paris Summit Trajectory
The international AI safety summit series has moved through three phases:
- Bletchley (November 2023): Established the principle of international AI safety cooperation; 28-country declaration; no binding commitments.
- Seoul (May 2024): First time companies signed specific safety commitments with defined risk thresholds; 16 companies; International Network of AI Safety Institutes announced.[8]
- Paris (February 2025): Largest summit in the series (100+ countries); ~60-nation declaration on inclusive and sustainable AI; US and UK did not sign the main declaration; €109B in private-sector investment pledges and €200B in EU investment announced.[16]
The trajectory shows expanding participation but also diverging national priorities: the US has increasingly prioritized AI development speed over safety-focused coordination, while the EU and signatories from the Global South have emphasized inclusive and sustainable AI. No subsequent summit has been announced as the canonical next step as of mid-2026.
Risks Addressed
AI governance mechanisms address different categories of risk with uneven coverage:
| Risk Category | Mechanisms Addressing It | Coverage as of Mid-2026 |
|---|---|---|
| Discriminatory automated decisions (hiring, credit, housing) | EU AI Act (high-risk requirements), Colorado AI Act, Texas TRAIGA | Mandatory in EU and two US states; impact assessments required |
| AI-enabled weapons development (CBRN) | RSPs (capability thresholds), Seoul commitments (16 companies) | Voluntary frameworks only; no binding prohibition |
| Malicious AI use (fraud, disinformation) | National content laws, China's labeling requirements, content moderation policies | Patchy; varies significantly by jurisdiction |
| Concentration of AI capabilities and market power | General antitrust review | No AI-specific governance mechanism |
| Frontier model catastrophic or existential risks | RSPs (voluntary), AI Safety Institute evaluations | Voluntary only; no mandatory regime as of mid-2026 |
| Unauthorized technology transfer | US chip export controls | Active enforcement with documented significant evasion[11] |
Significant governance gaps remain. No binding international regime covers catastrophic risks from frontier AI systems. No governance mechanism directly addresses deceptive alignment or scheming risks in deployed models. No framework addresses recursive self-improvement scenarios. The risk categories attracting the most governance attention (discriminatory decisions, content harms) are not the same as the risk categories AI safety researchers assess as highest-stakes (catastrophic or irreversible harm from advanced systems).
Limitations
Enforcement Capacity Gaps
Most governance frameworks assume enforcement capacity that may not exist at the required scale. The EU AI Office has 125+ staff to oversee GPAI models used by hundreds of millions of EU residents, while 27 national market surveillance authorities of varying technical sophistication handle enforcement for most other AI systems. State attorneys general in the US similarly lack dedicated technical staff to evaluate AI system capabilities or deployment contexts independently.
Pace Mismatch
Legislative processes in most democracies operate on 1–3 year timescales. The EU AI Act took approximately three years from initial proposal to GPAI obligations entering force. During that period, frontier AI capability advanced substantially. Regulations calibrated to 2023-era systems may require revision before full enforcement even begins.
Jurisdictional Arbitrage
AI development is global, but governance is primarily national. Companies can in principle site frontier training runs in jurisdictions with less stringent requirements. The absence of binding international coordination agreements as of mid-2026 means no mechanism prevents this. The Paris 2025 summit illustrated the limits of consensus-building: the world's largest AI-developing nation did not sign the main declaration.[16]
Technical Expertise Deficits
Regulatory agencies lack staff with expertise to independently evaluate advanced AI systems. Capability evaluations under RSPs are conducted by the labs themselves, supplemented by some external red-teaming at AI Safety Institutes; no mandatory independent auditing requirement exists for any framework. This creates a dependency on self-reporting that is structural rather than temporary.
Verification Challenges
Independent verification of compliance is limited across both voluntary and mandatory frameworks. The Biden administration's 2023 voluntary commitments specified no verification mechanism.[9] Chip export controls face organized circumvention—see Compute Governance above for documented cases including smuggling networks and falsified end-user certificates.[11]
Competing Perspectives on Governance Expansion
Critics of expanded AI governance include: (1) those who argue that compliance costs advantage large incumbent firms over smaller competitors and startups; (2) those concerned about regulatory capture, where capable AI developers shape rules that entrench their market position; (3) those who contend that regulations calibrated to current capabilities misallocate compliance costs without meaningfully addressing risks from future, qualitatively more capable systems; and (4) those who argue that international AI governance coordination is unlikely to succeed given US-China strategic competition, making unilateral Western regulation costly without global benefit. Proponents of governance frameworks contend that even imperfect voluntary commitments establish norms that constrain industry behavior, that mandatory requirements create accountability infrastructure useful when higher-stakes decisions arise, and that early governance frameworks can be refined as capabilities and risks become clearer.
Governance Assessment
- AI Governance and Policy: Broader analysis of governance approaches and their effectiveness
- Policy Effectiveness Assessment: Evaluating which governance interventions actually reduce risk
- AI Governance Effectiveness Analysis: Systematic cross-mechanism analysis with evidence ratings for each governance approach
- AI Power and Influence Map: Mapping of actors, funding flows, and decision-making authority shaping AI governance outcomes
Key Tensions
Speed vs. thoroughness: The pace of AI capability development has outstripped the pace of legislative and regulatory processes in most jurisdictions. The EU AI Act took approximately three years from proposal to GPAI obligations entering force; frontier AI capability advanced substantially in that interval. Regulatory update mechanisms exist in some frameworks but have not yet been tested in practice.
National vs. international: AI development is global, but governance is primarily national, creating coordination challenges and regulatory arbitrage risks. International agreements through the summit process (Bletchley, Seoul, Paris) have been voluntary; no binding international treaty with enforcement mechanisms covers frontier AI risks as of mid-2026. The Paris 2025 summit's main declaration being unsigned by the US and UK illustrates the limits of multilateral consensus-building.[16]
Voluntary vs. mandatory: Industry self-regulation (RSPs, voluntary commitments) can be implemented faster and may be more technically sophisticated than legislation, which takes years to develop and may be written without deep AI expertise. Proponents of voluntary frameworks argue they are more adaptive and technically credible. Critics note that voluntary commitments lack third-party verification and can be superseded by political changes, as occurred with the Biden voluntary commitments after the EO revocation.[9][4] Mandatory frameworks address the enforcement gap but face their own challenges: regulatory agencies lack the technical capacity to evaluate advanced AI systems independently, and enforcement has lagged capability development in every jurisdiction that has enacted mandatory rules.
Compute governance as potential leverage point: Some analysts argue that compute is among the more tractable inputs to govern because it is physical, concentrated in supply chains, and measurable. Others contend that algorithm efficiency improvements (which allow comparable capabilities at lower compute) and the importance of talent, data, and software make compute governance insufficient as a standalone approach. The debate over whether compute governance is the most promising intervention remains active. As of mid-2026, compute governance has not achieved international coordination, and US export controls face documented circumvention through organized smuggling networks operating at significant scale.[11]
Footnotes
[1] European Artificial Intelligence Office, Wikipedia (2025); European Commission, "European AI Office" (2025).
[2] EU AI Act Implementation Timeline (2025); Latham & Watkins, "EU AI Act: GPAI Model Obligations in Force and Final GPAI Code of Practice in Place" (August 2025); DLA Piper, "Latest wave of obligations under the EU AI Act take effect" (August 2025).
[3] NIST, "AI Risk Management Framework" (2025).
[4] Trump White House, "Removing Barriers to American Leadership in Artificial Intelligence" (January 20, 2025); Skadden, "AI: Broad Biden Order Is Withdrawn, but Replacement Policies Are Yet To Be Drafted" (2025).
[5] Baker Botts, "Texas Enacts Responsible AI Governance Act: What Companies Need to Know" (July 2025); Greenberg Traurig, "TRAIGA: Key Provisions of Texas' New Artificial Intelligence Governance Act" (June 2025).
[6] Baker Botts, "Colorado AI Act Implementation Delayed" (September 2025); National Law Review, "Colorado Delays AI Act Implementation to June 2026" (2025).
[7] Office of Governor Newsom, "Governor Newsom signs SB 53" (September 29, 2025).
[8] UK Government, "Frontier AI Safety Commitments, AI Seoul Summit 2024" (May 2024); techUK, "Key Outcomes of the AI Seoul Summit" (May 2024).
[9] Biden White House, "FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies" (July 21, 2023); FedScoop, "Where Biden's voluntary AI commitments go from here" (2025).
[10] Framework Convention on Artificial Intelligence, Wikipedia (2025); US Department of State, "The Council of Europe's Framework Convention on AI" (2024).
[11] AI Frontiers, "How US Export Controls Have (and Haven't) Curbed Chinese AI" (2025); CNBC, "How $160 million worth of export-controlled Nvidia chips were allegedly smuggled into China" (December 31, 2025); CSIS, "The Limits of Chip Export Controls in Meeting the China Challenge" (2025).
[12] White & Case, "AI Watch: Global Regulatory Tracker – China" (2025).
[13] Montreal AI Ethics Institute, "The Death of Canada's Artificial Intelligence and Data Act" (2025).
[14] Anthropic, "Responsible Scaling Policy Version 3.0" (2025); Federation of American Scientists, "Can Preparedness Frameworks Pull Their Weight?" (2024).
[15] Governor Gavin Newsom, SB 1047 Veto Message (September 29, 2024).
[16] AI Action Summit, Wikipedia (2025); TechPolicy.Press, "At Paris AI Summit, US, EU, Other Nations Lay Out Divergent Goals" (February 2025).