
AI Governance Interventions Timeline


A chronological dataset of 44 major AI governance interventions from 2022 through mid-2026, each coded for effectiveness. Most interventions are assessed as 'partial' or 'too early to tell': voluntary frameworks dominate, and enforcement-capable mechanisms (notably export controls and the EU AI Act) are the exceptions. Beyond serving as a reference resource, the dataset reveals systemic patterns such as the enforcement gap in voluntary commitments and the sharp US deregulatory reversal under the Trump administration.


Data table page — This page feeds the AI Governance and Policy analysis. For conceptual background on how governance strategy interacts with AGI timelines, see Short AI Timeline Policy Implications.

| Source | Link |
| --- | --- |
| Official Timelines Reference | timelines.issarice.com |
| Wikipedia | en.wikipedia.org |

Overview

This page maintains a chronological log of major AI governance interventions from 2022 through mid-2026, intended as a data resource for the AI Governance Effectiveness Analysis. Entries cover a wide range of intervention types: executive orders and presidential directives, domestic legislation (enacted and failed), international agreements and summit declarations, voluntary industry commitments, export controls, AI safety institute formations, and technical standards bodies. Each row includes an assessment field coded as effective, partial, ineffective, or too early to tell, based on available evidence of implementation and observable outcomes.

The period covered is unusually dense. The EU AI Act moved from political agreement (December 2023) through adoption (July 2024) into phased implementation across 2025–2030. The US oscillated between the safety-oriented Executive Order 14110 (October 2023) and its January 2025 revocation, followed by the deregulatory AI Action Plan (July 2025). Internationally, a sequence of frontier AI safety summits at Bletchley (November 2023), Seoul (May 2024), and Paris (February 2025) produced escalating but largely non-binding commitments. Export controls on advanced AI chips, first imposed in October 2022, were tightened in successive rounds between 2023 and 2025, making them among the more consequential and enforceable interventions in the dataset.

Assessments are provisional. Many interventions are either too recent to evaluate or lack independent monitoring mechanisms. Where disagreement exists in the literature about whether an intervention has had its intended effect, the assessment defaults to partial or too early to tell. The methodology section below describes coding decisions in more detail.


Methodology

Unit of analysis. Each row represents a discrete policy instrument: a signed executive order, an enacted statute, a formally adopted international agreement, a published framework or standard, or a documented voluntary commitment. Draft proposals and white papers without formal adoption are excluded unless they generated observable downstream effects.

Assessment coding. The four assessment codes are defined as follows:

  • Effective — the intervention was implemented substantially as intended and evidence suggests it achieved at least its proximate stated goal.
  • Partial — the intervention was implemented but with significant gaps, delays, or scope limitations that reduced its effectiveness relative to stated goals.
  • Ineffective — the intervention was not implemented, was rescinded before taking effect, or evidence suggests it failed to achieve its proximate goals.
  • Too early to tell — the intervention has been adopted but insufficient time or evidence exists to assess outcomes.
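
For reuse in downstream analysis, the log can be treated as a small structured dataset. The sketch below is a minimal Python schema for one row, with illustrative field names (this is not an established interchange format):

```python
from dataclasses import dataclass
from enum import Enum

class Assessment(Enum):
    """The four assessment codes defined above."""
    EFFECTIVE = "effective"
    PARTIAL = "partial"
    INEFFECTIVE = "ineffective"
    TOO_EARLY = "too early to tell"

@dataclass
class Intervention:
    """One row of the intervention log (field names are illustrative)."""
    row: int                # row number in the log
    date: str               # date of entry into force or formal adoption
    actor: str              # e.g. "US Dept of Commerce (BIS)"
    name: str               # intervention name
    target: str             # who or what the intervention targets
    enforcement: str        # enforcement mechanism, if any
    intended_outcome: str
    observed_outcome: str
    assessment: Assessment
```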

Scope. Interventions are included if they meet one or more of the following criteria: they were adopted by a jurisdiction with significant AI development activity (US, EU member states, China, UK, Singapore, India); they operated at an international or multilateral level; or they were assessed in the AI safety literature as potentially significant for catastrophic risk reduction.
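
The inclusion rule is a simple disjunction over the three criteria. A sketch, with hypothetical boolean arguments standing in for each criterion:

```python
CORE_JURISDICTIONS = {"US", "EU", "China", "UK", "Singapore", "India"}

def in_scope(jurisdiction: str, multilateral: bool, safety_literature_flag: bool) -> bool:
    """True if an intervention meets at least one inclusion criterion."""
    return (
        jurisdiction in CORE_JURISDICTIONS  # significant AI development activity
        or multilateral                     # international or multilateral level
        or safety_literature_flag           # assessed as significant in the AI safety literature
    )
```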

Source limitations. The research base for this table is strongest for US and EU interventions and weakest for China, ASEAN, and Africa. Post-2025 entries carry higher uncertainty. Assessments for interventions still in phased rollout (notably the EU AI Act) may be updated as implementation progresses.


Intervention Log

The table below contains 44 rows covering the period March 2022 through mid-2026. Rows are ordered approximately chronologically by the date the intervention entered into force or was formally adopted; a few entries are grouped with related milestones rather than strictly by date.

| # | Date | Actor | Intervention Name | Target | Enforcement Mechanism | Intended Outcome | Observed Outcome | Assessment |
|---|---|---|---|---|---|---|---|---|
| 1 | Oct 7, 2022 | US Dept of Commerce (BIS) | Export Controls on Advanced AI Chips (initial rule) | Chinese entities acquiring advanced semiconductors (A100/H100 class) | Export license denials; entity list | Prevent transfer of compute enabling advanced AI/military applications to China | Restricted some direct sales; significant circumvention via third countries reported; NVIDIA initially redesigned chips (A800/H800) to stay below thresholds | Partial |
| 2 | Mar 1, 2022 | China (CAC) | Internet Information Service Algorithmic Recommendation Provisions | Domestic platforms using algorithmic content recommendation | Regulatory enforcement by Cyberspace Administration of China | Accountability, transparency, and safety in AI-driven content; reduce manipulation risks | Implemented with compliance filings; scope limited to recommendation systems, not foundation models | Partial |
| 3 | Jan 26, 2023 | US NIST | AI Risk Management Framework (AI RMF 1.0) | US organizations developing or deploying AI systems | Voluntary adoption; referenced in federal procurement | Provide structured risk management vocabulary and practices for AI | Widely cited; adopted as a reference by federal agencies and some industry; lack of binding compliance requirement limits uptake | Partial |
| 4 | Jul 21, 2023 | White House / Biden Administration | Voluntary AI Safety Commitments (White House) | Amazon, Anthropic, Google, Inflection, Meta, Microsoft, OpenAI | Voluntary; no legal penalty for non-compliance | Safe, secure, and trustworthy AI development; watermarking, red-teaming, info-sharing | Commitments published; implementation uneven; no independent verification mechanism; companies self-reported compliance | Partial |
| 5 | Jul 13, 2023 | China (CAC) | Interim Measures for the Management of Generative AI Services | Domestic generative AI service providers | CAC regulatory enforcement; security assessment requirement | Require safety assessments, content moderation, data labeling, real-name registration for generative AI | Implemented; companies required to file; scope limited to services available in China; chilling effect on some releases reported | Partial |
| 6 | Oct 30, 2023 | White House / Biden Administration | Executive Order 14110 on Safe, Secure, and Trustworthy AI | Federal agencies and developers of dual-use foundation models above compute thresholds | Presidential directive; agency rulemaking; DPA invocation authority | Safety testing, reporting to government, cybersecurity standards, fraud protection, equity assessments | Agencies published implementation reports; significant rulemaking initiated; revoked January 2025 before full implementation | Ineffective (rescinded) |
| 7 | Nov 1–2, 2023 | UK Government / 28 countries | Bletchley Declaration on AI Safety | Frontier AI developers and signatory governments | Non-binding political declaration | International recognition of catastrophic AI risks; establish ongoing intergovernmental safety dialogue | Established AI Safety Institute (UK); influenced Seoul commitments; no binding obligations created | Partial |
| 8 | Nov 2023 | UK Government | AI Safety Institute (AISI) Formation | Frontier AI developers (voluntary engagement) | Non-statutory; voluntary testing agreements | Conduct evaluations of frontier models for dangerous capabilities; produce public safety reports | Conducted evaluations of GPT-4, Claude 3, Gemini; published findings; US AISI announced February 2024; influence on deployment decisions limited | Partial |
| 9 | Dec 2023 | EU Council / EU Parliament | EU AI Act — Political Agreement Reached | AI systems developed or deployed in EU | Regulation with binding legal force; market access conditions | Risk-based regulation: prohibit unacceptable-risk uses; require conformity assessments for high-risk systems | Political agreement reached; formal text finalized; published July 12, 2024; implementation phased through 2030 | Too early to tell |
| 10 | Feb 2024 | US NIST | AI Safety Institute (US AISI) Formation | Frontier AI developers (voluntary testing agreements) | Non-statutory; voluntary MoUs | Evaluate pre-deployment safety of frontier models; complement UK AISI | MoUs signed with Anthropic, OpenAI, and others; evaluations conducted; renamed/reorganized under Trump administration in 2025 | Partial |
| 11 | Mar 2024 | EU Parliament | EU AI Act — Parliamentary Approval | See row 9 | See row 9 | Formal legislative adoption | Approved; forwarded to Council | Too early to tell (milestone) |
| 12 | Mar 21, 2024 | UN General Assembly | UN Resolution on AI Governance | Member states; AI developers | Non-binding resolution | International cooperation on safe, trustworthy AI; inclusive governance | Adopted unanimously; non-binding; established framework for subsequent UN dialogue; limited enforcement | Too early to tell |
| 13 | Jan 2024 | UK Government | Generative AI Framework for HM Government | UK public sector bodies | Internal government guidance; no statutory force | Safe and responsible use of generative AI in government operations | Published and distributed; implementation across departments uneven; no compliance monitoring reported | Too early to tell |
| 14 | Jan 2024 | US NIST | Adversarial Machine Learning Taxonomy (NIST AI 100-2) | AI practitioners and developers | Voluntary reference standard | Improve robustness practices against adversarial attacks on AI systems | Published; adopted as a reference by the security community; no binding uptake mechanism | Too early to tell |
| 15 | Mar 2024 | US OMB | OMB Memorandum on AI Governance (M-24-10) | US federal agencies | OMB oversight; agency compliance reporting | Minimum risk management practices for federal AI use; agency AI officer requirements | Agencies required to designate Chief AI Officers and inventory AI uses; compliance reporting initiated; superseded by new OMB guidance in 2025 | Partial |
| 16 | Mar 2024 | India (MeitY) | Generative AI Advisory | Indian AI companies and platforms | Non-binding advisory; no statutory penalty | Responsible deployment of generative AI; bias mitigation, labeling, legal compliance | Published; voluntary nature limited uptake; India AI Act development ongoing | Too early to tell |
| 17 | May 2024 | EU Council | EU AI Act — Council Approval | See row 9 | See row 9 | Final legislative adoption | Approved; entered into force August 1, 2024 | Too early to tell (milestone) |
| 18 | May 2024 | EU Commission | EU AI Office — Established | Providers of general-purpose AI (GPAI) models in EU | Regulatory enforcement under AI Act | Oversight of GPAI models; code of practice development; coordinate national authorities | Established; began GPAI code of practice process; first code of practice finalized by mid-2025 | Too early to tell |
| 19 | May 2024 | Singapore (IMDA/MDDI) | Model AI Governance Framework for Generative AI | AI developers and deployers using generative AI | Voluntary; industry guidance | Responsible generative AI development: accountability, testing, transparency | Published; widely cited in ASEAN context; voluntary status limits enforcement | Too early to tell |
| 20 | May 21–22, 2024 | UK, Republic of Korea, 16+ countries | Seoul Frontier AI Safety Commitments | Frontier AI developers (16 companies signed) | Voluntary commitments; company-published safety policies | Safety thresholds and policies from frontier labs before deployment; government oversight | 16 companies published safety frameworks; implementation quality varied; no independent audit mechanism | Partial |
| 21 | May 2024 | Seoul Summit participants | International Network of AI Safety Institutes | National AI Safety Institutes | Soft coordination; information-sharing MoUs | Coordinate evaluations and share safety findings across governments | Network established; UK and US AISIs most active; expansion to other states ongoing | Too early to tell |
| 22 | Jul 12, 2024 | EU (Official Journal) | EU AI Act — Publication and Entry into Force | AI systems in EU market | Legally binding regulation; phased compliance deadlines | Full EU AI Act implementation per phased schedule | Published; prohibitions effective February 2, 2025; GPAI rules effective August 2, 2025; high-risk rules August 2026 | Too early to tell |
| 23 | Oct 2023 / Oct 2024 | US Dept of Commerce (BIS) | Export Controls — Updated Rules | Advanced AI chips and related items; country-tier restrictions | Export license system; entity list; third-country controls | Close circumvention loopholes; expand country tiers; restrict cloud access to controlled compute | Updated rules partially closed the A800/H800 loophole; country-tier system introduced; circumvention via Gulf states and others continued | Partial |
| 24 | Sep 2024 | California Governor (veto) | California SB 1047 — Vetoed | Large AI model developers (>10^26 FLOP training runs) | N/A (vetoed) | Safety evaluations, incident reporting, kill-switch requirements for large models | Vetoed by Governor Newsom, who cited concerns about chilling AI innovation in California; did not become law | Ineffective (vetoed) |
| 25 | 2024 | Colorado Legislature | Colorado AI Act (SB 24-205) | Developers and deployers of high-risk AI systems | State attorney general enforcement; no private right of action | Algorithmic discrimination protections; transparency requirements for consequential AI decisions | Enacted; compliance obligations phased; one of few enacted state AI bills with binding force; implementation ongoing | Too early to tell |
| 26 | 2024 | Utah Legislature | Utah SB 149 (AI Policy Act) | AI systems used in regulated sectors (primarily consumer services) | State enforcement | Disclosure requirements when AI interacts with consumers | Enacted; limited scope (disclosure only); regarded as a relatively weak intervention | Partial |
| 27 | Nov 2023 | OpenAI, Google DeepMind, Anthropic, Microsoft | Frontier Model Forum — Operational Activity | Frontier AI developers | Voluntary; industry self-governance | Industry coordination on AI safety research; best-practice sharing; red-teaming standards | Published research outputs; AI Safety Fund established; critics argued insufficient independence and no binding obligations | Partial |
| 28 | Feb 2, 2025 | EU (AI Act) | EU AI Act — Prohibitions and AI Literacy Obligations Apply | AI systems in prohibited categories (e.g., social scoring, real-time remote biometric identification, subliminal manipulation) | Direct prohibition under AI Act; national authority enforcement | Ban prohibited AI practices in EU; establish AI literacy obligations for providers | First binding deadline passed; European Commission published guidelines on prohibited AI practices; enforcement by national authorities nascent | Too early to tell |
| 29 | Jan 23, 2025 | White House / Trump Administration | Executive Order 14179 — Removing Barriers to American Leadership in AI | Federal agencies; prior AI safety EO | Presidential directive; agency action | Replace EO 14110 policy; direct a new AI action plan emphasizing innovation and deregulation | EO 14110 revoked (via the January 20 rescissions order); OMB M-24-10 superseded; AI Action Plan directed; significant shift in federal posture | Effective (as stated goal; safety implications contested) |
| 30 | Feb 2025 | Council of Europe | Framework Convention on Artificial Intelligence | Signatory states and their jurisdictions | Treaty obligations on signatories; domestic implementation required | Legally binding international AI standards on human rights, democracy, rule of law | Entered into force; first binding international AI treaty; ratification count limited at entry into force; long implementation timeline ahead | Too early to tell |
| 31 | Mar 2025 | ASEAN members | ASEAN Responsible AI Roadmap (2025–2030) | ASEAN member state governments and AI developers | Non-binding regional roadmap | Harmonized AI governance across ASEAN; responsible AI development standards | Published; voluntary; implementation depends on member-state follow-through | Too early to tell |
| 32 | Jul 10, 2025 | EU AI Office | EU AI Act — GPAI Code of Practice Finalized | Providers of general-purpose AI models in EU | Code of practice under AI Act; binding rules apply August 2025 | Establish obligations for GPAI providers: transparency, copyright compliance, systemic risk assessment | Code published July 2025 after the original May 2 deadline slipped; industry participated in drafting; binding obligations followed on August 2 | Too early to tell |
| 33 | May 2025 | US House of Representatives | Federal AI Preemption Moratorium — Passed House | State AI laws | Federal preemption if enacted | 10-year moratorium on state AI regulation to ensure national consistency | Passed House; stripped by the Senate 99–1 in July 2025; did not become law | Ineffective (failed in Senate) |
| 34 | May 2025 | Singapore (IMDA) | Singapore Consensus on AI Safety Research Priorities | AI safety researchers and frontier labs | Non-binding international research agenda | Align global AI safety research priorities; identify key open problems | Published; endorsed by multiple governments and labs; voluntary research coordination mechanism | Too early to tell |
| 35 | May 2025 | China (State Council) | Draft AI Law — Removed from Legislative Plan | Domestic AI developers | N/A (deprioritized) | Comprehensive domestic AI regulation | Removed from State Council legislative agenda; signals China deprioritizing a binding domestic AI law in the near term | Ineffective (deprioritized) |
| 36 | Jul 2025 | White House / Trump Administration | America's AI Action Plan | Federal agencies; states; AI industry | Presidential action plan; executive orders; federal funding conditions | Deregulatory AI governance; innovation first; competition with China; delegating safety to the private sector | Released July 23, 2025, with three accompanying executive orders; conditions federal funding on states not restricting AI; implementation ongoing | Too early to tell |
| 37 | Jul 2025 | US Senate | Federal AI Preemption Moratorium — Rejected by Senate | State AI laws | N/A (rejected) | Prevent state AI regulation fragmentation | Rejected 99–1; 17 Republican governors and 40 state attorneys general had opposed it; reinforced state role in AI governance | Ineffective (rejected) |
| 38 | Aug 2, 2025 | EU (AI Act) | EU AI Act — GPAI Rules, Governance Bodies, Penalties Apply | GPAI model providers; national AI authorities | AI Act enforcement; national authority designation; EU AI Board | GPAI model obligations in force; penalties applicable; national authorities designated | Deadline reached; national authority designations completed in most member states; enforcement capacity varies significantly by country | Too early to tell |
| 39 | Aug 2025 | United Nations | UN Global Dialogue on AI Governance — Established | Member states; international organizations; civil society | UN facilitation; non-binding dialogue | Inclusive multilateral forum for AI governance; address Global South participation gaps | Established; July 2026 summit planned; early structural criticism that the dialogue lacks binding mechanisms or a moral compass | Too early to tell |
| 40 | Jul 2025 | Singapore (IMDA) | Singapore Global AI Assurance Sandbox — Launch | AI developers seeking pre-deployment safety assessment | Voluntary; sandbox testing | Provide structured safety-testing environment; support responsible deployment | Launched; early-stage uptake; voluntary participation limits scope | Too early to tell |
| 41 | Nov 2025 | China | National Cybersecurity Law — Updated with AI Provisions | Domestic AI systems and data practices | Chinese legal enforcement | Integrate AI-specific cybersecurity requirements into national law | Enacted; scope and implementation details limited in available reporting | Too early to tell |
| 42 | Nov 2025 | India (Government) | AI Governance Guidelines | Indian AI developers and deployers | Non-binding guidelines | Responsible AI framework for the Indian context; complement Digital India Act development | Published; voluntary; binding Digital India Act still under development | Too early to tell |
| 43 | Feb 2026 | US (National Security Council) | Framework to Advance AI Governance and Risk Management in National Security | US national security agencies and contractors using AI | Executive policy; classified and unclassified components | Integrate AI risk management into national security decision-making | Published; limited public detail available on implementation | Too early to tell |
| 44 | Aug 2, 2026 | EU (AI Act) | EU AI Act — High-Risk AI Rules Apply (Annex III) | Developers and deployers of high-risk AI systems (hiring, credit, education, law enforcement, etc.) | AI Act enforcement; conformity assessment; notified bodies | Require conformity assessments, registration, human oversight for high-risk AI | Scheduled deadline; European Commission proposed simplification/delay in late 2025, creating uncertainty; final status pending | Too early to tell |

Notable Patterns and Cross-Cutting Observations

Several patterns emerge across the intervention log that are relevant to the broader AI governance effectiveness analysis.

The enforcement gap. The majority of interventions coded as partial share a common structural feature: they establish substantive requirements but lack independent monitoring or verification mechanisms. The White House Voluntary Commitments (row 4), the Seoul Frontier AI Safety Commitments (row 20), and the Singapore Model AI Governance Framework (row 19) all fall into this category. Voluntary frameworks without third-party audit produce compliance that is difficult to distinguish from performative adoption. This is noted in the broader literature as a central limitation of the governance landscape prior to the EU AI Act's binding phase.

Export controls as outlier. The October 2022 semiconductor export controls (row 1) and their 2023–2024 updates (row 23) represent the most enforcement-capable interventions in the dataset. They operate through the existing US export licensing system, carry criminal penalties, and have demonstrated observable effects on chip acquisition patterns. The assessments for these rows nonetheless remain partial because documented circumvention through third-country re-export, cloud-access workarounds, and redesigned products limits their effectiveness relative to stated goals. Compute Governance and Hardware-Enabled Governance literature treats these controls as a significant proof-of-concept for technical AI governance, while noting the ongoing cat-and-mouse dynamic.

The US oscillation. Rows 6, 15, and 29 trace a sharp reversal in US federal AI governance posture. Executive Order 14110 (October 2023) established the most comprehensive federal AI safety requirements to date, including mandatory reporting of large-model safety tests to the government. Its revocation in January 2025, before most implementing rules had been finalized, means the intended outcome was never realized, producing an ineffective assessment on safety grounds — though Executive Order 14179 was effective as a statement of the Trump administration's deregulatory intent. The AI Action Plan (row 36) that followed delegated safety responsibility to the private sector and conditioned federal funding on states not enacting AI restrictions, a posture assessed by critics at Harvard's ethics program as prioritizing economic competition over harm prevention.

The Bletchley–Seoul–Paris sequence. The three frontier AI safety summits produced escalating formality: Bletchley (2023) generated a political declaration; Seoul (2024) produced company-level safety commitments from 16 frontier labs; Paris (2025) built on these with further governmental statements. All three are assessed as partial rather than effective because they generated no binding obligations and relied on companies self-reporting implementation. The summits were nonetheless significant in establishing a shared vocabulary around catastrophic and systemic AI risk and in providing political cover for the creation of national AI Safety Institutes in the UK and US.

EU AI Act as structural anchor. The EU AI Act (rows 9, 11, 17, 22, 28, 32, 38, 44) is the most structurally significant governance intervention in the dataset. Its phased implementation creates binding legal obligations for market access in the EU, backed by penalties and a formal regulatory architecture. Its assessments remain too early to tell throughout because the binding enforcement phases have only recently begun. The proposed delay to the August 2026 high-risk deadline (row 44) illustrates the political pressure on implementation timelines and introduces uncertainty that the dataset tracks but cannot yet resolve.

State-level governance in the US. The veto of California SB 1047 (row 24) and the Senate rejection of the federal preemption moratorium (row 37) together illustrate the contested terrain of US AI governance. SB 1047 was the most ambitious attempt at compute-threshold–based safety requirements for frontier models in any US jurisdiction; its veto removed a potential model for state-level frontier AI governance. The moratorium's 99–1 Senate rejection — opposed by both conservative governors and civil society — preserved state authority but left a fragmented regulatory landscape, with over 480 state AI bills enacted as of 2025. The Texas Responsible AI Governance Act (TRAIGA) represents a continuing state-level effort in this space.


Interventions by Type: Summary Count

Each row in the log is assigned one primary type (EU AI Act milestone rows are tallied as their own category), and the counts below are derived directly from the assessments in the log above.

| Intervention Type | Count | Effective | Partial | Ineffective | Too Early to Tell |
| --- | --- | --- | --- | --- | --- |
| Executive Orders / Presidential Actions | 5 | 1 | 1 | 1 | 2 |
| Legislation (enacted) | 3 | 0 | 1 | 0 | 2 |
| Legislation (failed/vetoed) | 3 | 0 | 0 | 3 | 0 |
| EU AI Act milestones | 8 | 0 | 0 | 0 | 8 |
| International Declarations / Treaties | 4 | 0 | 1 | 0 | 3 |
| Voluntary Commitments (industry) | 3 | 0 | 3 | 0 | 0 |
| Export Controls | 2 | 0 | 2 | 0 | 0 |
| Technical Standards / Frameworks (non-binding) | 5 | 0 | 1 | 0 | 4 |
| Institutional Formation (AISIs, offices) | 6 | 0 | 2 | 0 | 4 |
| Domestic Regulations (China, India, Singapore) | 5 | 0 | 2 | 1 | 2 |
| Total | 44 | 1 | 13 | 5 | 25 |
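
The counts above can be reproduced mechanically from the log. A minimal sketch, assuming the `Intervention` schema from the methodology section and a hypothetical `intervention_type()` classifier that maps each row to one of the type labels above:

```python
from collections import Counter

def summarize(log: list[Intervention]) -> dict[str, Counter]:
    """Tally assessment codes per intervention type, plus a Total row."""
    summary: dict[str, Counter] = {}
    for entry in log:
        kind = intervention_type(entry)  # hypothetical: one primary type per row
        summary.setdefault(kind, Counter())[entry.assessment] += 1
    summary["Total"] = sum(summary.values(), Counter())
    return summary
```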

Criticisms and Concerns

The governance landscape documented in this table draws significant criticism from multiple directions, which the effectiveness assessments attempt to reflect rather than adjudicate.

Voluntary frameworks are insufficient for catastrophic risks. A recurring critique in both the academic and EA/rationalist literatures is that the majority of governance interventions through 2025 remain voluntary, lack independent verification, and are therefore unlikely to constrain frontier AI development in ways that matter for catastrophic risk. The predominance of partial assessments for voluntary commitments in the table reflects this structural limitation. Critics have argued that relying on the private sector to self-report compliance — as both the White House Voluntary Commitments and Seoul Frontier Commitments do — repeats the pattern of social media self-regulation that produced documented harms without accountability.

Deregulatory US shift creates global governance gap. The Trump administration's January 2025 revocation of EO 14110 and the subsequent AI Action Plan have been characterized by governance researchers as creating a significant gap in the world's most AI-capable jurisdiction. Harvard ethics commentators described the plan as prioritizing economic competition over ethical safeguards and delegating risk management to the private sector without a statutory backstop. The US federal moratorium's failure (99–1 Senate vote) simultaneously prevented the kind of national coordination that could have replaced EO 14110 with a legislative equivalent.

AGI timeline dependency limits governance strategy. Research circulating on the EA Forum and LessWrong argues that the efficacy of governance interventions is deeply sensitive to AI Timelines. Under pre-2030 timelines, broad regulatory and advocacy approaches may be too slow: awareness campaigns and legislative cycles typically take years to produce results. This suggests a mismatch between the timescale of most interventions in this dataset and the timescale on which transformative AI risks may materialize. Some researchers have argued for prioritizing corporate governance and security-focused interventions within existing frontier labs as faster-acting alternatives under short timelines.

Methodological concerns about timeline forecasting itself. A separate strand of criticism questions the value of timeline-dependent governance analysis, arguing that AGI timeline forecasting suffers from deference cycles (where a small number of influential forecasters tacitly inform a wider consensus, creating a false appearance of independent agreement) and poor operationalization of what AGI means. This limits confidence in the conditional assessments.

Implementation gaps even for binding frameworks. The EU AI Act's phased implementation, the proposed delay to the August 2026 high-risk deadline, and the significant variation in national authority capacity across EU member states illustrate that formal adoption does not guarantee effective implementation. The table captures this with too early to tell assessments for most EU AI Act milestones, reflecting genuine uncertainty rather than optimism.


Key Uncertainties

  • Whether EU AI Act enforcement capacity will be sufficient to produce meaningful compliance by frontier model providers, particularly non-EU companies
  • Whether the US federal AI governance vacuum following EO 14110's revocation will be filled by state legislation, industry self-governance, or eventual congressional action
  • Whether export controls on advanced AI chips will remain effective as chip design, manufacturing, and cloud delivery continue to evolve
  • Whether the frontier AI safety summit process (Bletchley → Seoul → Paris) will produce binding mechanisms or remain in the voluntary commitment paradigm
  • Whether the Council of Europe AI Convention will attract ratification from major AI-developing states or remain limited in scope
  • How governance effectiveness assessments for 2025–2026 interventions should be updated as implementation evidence accumulates

Changelog

| Date | Change |
| --- | --- |
| 2026-04-12 | Initial page creation; 44-row intervention log through mid-2026; methodology section; cross-cutting observations |
