Executive Order on Safe, Secure, and Trustworthy AI
Overview
Section titled “Overview”Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence, signed by President Biden on October 30, 2023, represented the most comprehensive federal response to AI governance in US history. The 111-page directive established mandatory reporting requirements for frontier AI systems, created new oversight institutions, and addressed both immediate risks like algorithmic bias and long-term catastrophic risks from advanced AI capabilities. According to analysis by Stanford HAI, the order placed 150 specific requirements on over 50 federal entities—making it the most detailed AI policy directive ever issued by any government.
The order was revoked by President Trump on January 20, 2025, within hours of his assuming office. The White House stated that EO 14110 “hindered AI innovation and imposed onerous and unnecessary government control over the development of AI.” Stanford HAI’s implementation tracker showed that approximately 85% of the order’s 150 distinct requirements had been completed before revocation.
Quick Assessment
Section titled “Quick Assessment”| Dimension | Assessment | Evidence |
|---|---|---|
| Duration | 15 months | Oct 30, 2023 to Jan 20, 2025 |
| Scope | 150+ requirements | Across 50+ federal entities per Stanford HAI |
| Implementation | ≈85% completed | 13/13 management requirements fully implemented per GAO |
| Budget Impact | $10M initial, $47.7M requested | AISI received $10M FY2024; Biden requested +$47.7M for FY2025 |
| Companies Affected | Fewer than 15 | BIS assessment: no more than 15 companies exceeded compute thresholds |
| Enforcement | Weak | No specified penalties; relied on voluntary cooperation |
| Durability | Revoked Day 1 | Executive action vulnerable to administration change |
| Legacy | Partial survival | Final rules (KYC) require formal rulemaking to rescind; AISI → CAISI June 2025 |
For AI safety, the order represented both progress and limitations. It normalized government oversight of frontier AI development and created institutional capacity through the US AI Safety Institute. Yet it primarily focused on transparency and voluntary cooperation rather than mandatory safety requirements or deployment restrictions.
Key Provisions and Mechanisms
Section titled “Key Provisions and Mechanisms”Compute-Based Reporting Framework
Section titled “Compute-Based Reporting Framework”The order’s most innovative feature was its use of computational thresholds to trigger regulatory requirements. Companies training models using more than 10^26 floating-point operations (FLOP) were required to notify the Department of Commerce before and during training, share safety testing results, and provide detailed information about model capabilities, cybersecurity measures, and red-team testing outcomes.
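To make the trigger mechanics concrete, here is a minimal sketch of how the order’s two FLOP thresholds gate the reporting requirement. The thresholds are those specified in EO 14110; the function and constant names are illustrative, not from any official implementation, and the model FLOP figures are the public estimates cited in this article.

```python
# Minimal sketch of EO 14110's compute-based reporting trigger.
# Thresholds come from the order; everything else is illustrative.

GENERAL_THRESHOLD_FLOP = 1e26  # dual-use foundation models
BIO_THRESHOLD_FLOP = 1e23      # models trained primarily on biological sequence data

def reporting_required(training_flop: float, bio_sequence_model: bool = False) -> bool:
    """Would this training run have triggered EO 14110 reporting to Commerce?"""
    threshold = BIO_THRESHOLD_FLOP if bio_sequence_model else GENERAL_THRESHOLD_FLOP
    return training_flop >= threshold

# Public compute estimates cited in this article:
print(reporting_required(3.14e23))  # GPT-3: False (~318x below the general threshold)
print(reporting_required(2e25))     # GPT-4 estimate: False
print(reporting_required(3e25))     # GPT-5 estimate: False (no model ever triggered it)
print(reporting_required(3.14e23, bio_sequence_model=True))  # True for a bio-sequence model
```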
Compute Threshold Comparison
Section titled “Compute Threshold Comparison”| Threshold | Application | Training Cost Estimate | Models Affected |
|---|---|---|---|
| 10^26 FLOP | General dual-use foundation models | $10-100M per training run | Anticipated next-generation frontier models (none confirmed to have crossed it) |
| 10^23 FLOP | Biological sequence models | ≈$10-100K per training run | Specialized bio-AI tools |
| 10^20 FLOP/s | Computing cluster capacity threshold | N/A | Large data centers |
| GPT-4 (reference) | Estimated at ≈2 × 10^25 FLOP | ≈$100M | Just under general threshold |
| GPT-5 (reference) | Estimated at ≈3 × 10^25 FLOP | ≈$200M+ | Still below threshold |
| GPT-3 (reference) | 3.14 × 10^23 FLOP | ≈$1M | ≈318x below threshold |
A Biden Administration official stated that “the threshold was set such that current models wouldn’t be captured but the next generation state-of-the-art models likely would.” The Bureau of Industry and Security assessed that no more than 15 companies exceeded the reporting thresholds for models and computing clusters.
No model ever triggered the threshold before revocation. Epoch AI estimated GPT-5 pretraining at approximately 3 × 10^25 FLOP—still below the 10^26 threshold. This reflects a shift in frontier AI development: rather than scaling pre-training compute by orders of magnitude, labs increasingly focus on inference-time compute (reasoning models like OpenAI o1) and algorithmic efficiency improvements. xAI’s Colossus data center may have approached 10^26 FLOP for some training runs, but this remains unconfirmed.
The separate 10^23 FLOP threshold for biological sequence models reflected concerns that even smaller models could assist in bioweapon development—approximately 1,000 times less compute than the general threshold, acknowledging that biological design capabilities may emerge at lower scales than general intelligence capabilities.
The compute-based approach offered several advantages over capability-based regulations. FLOP measurements are objective and difficult to manipulate, unlike subjective assessments of AI capabilities. The thresholds also provided predictability for developers. However, the static nature of these numbers created risks of obsolescence as algorithmic efficiency improves—researchers estimated the thresholds could become outdated within 3-5 years. According to Fenwick analysis, algorithmic improvements of approximately 2-3x per year mean a model that would have required 10^26 FLOP in 2023 might achieve equivalent capabilities with 10^25 FLOP by 2026—rendering static thresholds increasingly ineffective.
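A back-of-envelope sketch of that obsolescence dynamic, assuming (per the Fenwick figure above) a constant 2-3x annual efficiency gain; the loop structure and variable names are illustrative:

```python
# How much compute a later model needs to match a 2023 threshold-level
# capability, if algorithmic efficiency improves by a constant factor per year.
THRESHOLD_FLOP = 1e26  # EO 14110 general threshold, calibrated to 2023 training runs

for efficiency_per_year in (2.0, 3.0):
    for years in (1, 2, 3):
        equivalent_flop = THRESHOLD_FLOP / (efficiency_per_year ** years)
        print(f"{efficiency_per_year:.0f}x/yr after {years}y: "
              f"~{equivalent_flop:.1e} FLOP for threshold-level capability")

# At ~2-3x/yr, threshold-level capability drops to roughly 1e25 FLOP within
# 2-3 years—consistent with the "10^26 in 2023 vs 10^25 by 2026" illustration above.
```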
Institutional Infrastructure Creation
Section titled “Institutional Infrastructure Creation”The order established the US AI Safety Institute (AISI) within the National Institute of Standards and Technology, tasked with developing evaluation methodologies, conducting safety assessments, and coordinating with international partners. Unlike purely advisory bodies, AISI had operational responsibilities including direct testing of frontier models and developing technical standards for the broader AI ecosystem.
AISI Timeline and Development
Section titled “AISI Timeline and Development”| Date | Event |
|---|---|
| Nov 2023 | AISI founded at NIST, one day after EO 14110 signed |
| Feb 2024 | Elizabeth Kelly appointed as director; AISIC consortium created with 200+ member organizations |
| Mar 2024 | $10M initial budget allocated (vs. $47.7M FY2025 request) |
| May 2024 | NIST Director warns only $1M actually available; “very difficult without additional funding” |
| Aug 2024 | Agreements signed with Anthropic and OpenAI for pre-deployment testing |
| Nov 2024 | First joint evaluation with UK AISI: Claude 3.5 Sonnet assessment |
| Dec 2024 | OpenAI o1 model evaluation published |
| Jan 2025 | EO 14110 revoked; AISI future uncertain |
| Feb 2025 | Elizabeth Kelly resigns as director; NIST layoffs announced |
| Jun 2025 | Renamed to Center for AI Standards and Innovation (CAISI); mission refocused from safety to innovation |
AISI’s creation paralleled the UK’s AI Safety Institute, with the two signing cooperation agreements and developing shared evaluation frameworks. The November 2024 joint evaluation of Claude 3.5 Sonnet tested biological capabilities, cyber capabilities, software/AI development, and safeguard efficacy—representing the first such government-led assessment of a frontier model.
However, AISI faced significant resource constraints. With only $1-10M in actual funding versus the $47.7M requested, and staffing well below the estimated 200+ personnel needed for full capacity, the institute struggled to match the technical sophistication of private AI laboratories.
Global AI Safety Institute Comparison
Section titled “Global AI Safety Institute Comparison”| Institute | Established | Budget (Annual) | Staff | Key Activities |
|---|---|---|---|---|
| US AISI/CAISI | Nov 2023 | $10M (FY24); $6M actual spending | ≈50 estimated | Model evaluation; standards development |
| UK AISI | Nov 2023 | £100M (≈$125M) over 3 years | 100+ | Pre-deployment testing; international coordination |
| Japan AISI | Feb 2024 | ¥2B (≈$13M) initial | ≈30 | Standards research; evaluation frameworks |
| Singapore AISI | Feb 2024 | Not disclosed | ≈20 | Testing frameworks; regional coordination |
| Canada AISI | Nov 2024 | C$50M ($37M) pledged | Not disclosed | Launched Nov 2024 at SF summit |
| EU AI Office | Feb 2024 | Part of EC budget | ≈140 | Regulatory enforcement; standards |
The US AISI’s $10M budget contrasts sharply with the UK’s £100M commitment. NIST Director Laurie Locascio warned in May 2024 that only $1M was actually available, stating it would be “very, very tough” to continue operations without additional funding.
Leadership Transition and Organizational Uncertainty
Section titled “Leadership Transition and Organizational Uncertainty”Elizabeth Kelly, the inaugural AISI director, resigned on February 6, 2025. In her departure announcement, she stated: “I am confident that AISI’s future is bright and its mission remains vital to the future of AI innovation.” NIST Director Laurie Locascio also departed at the start of 2025 to head the American National Standards Institute (ANSI). Reports emerged that the Trump administration planned to lay off up to 500 NIST staffers, which posed particular risk for AISI as a new organization where most employees remained on probation.
Cloud Compute Governance
Section titled “Cloud Compute Governance”The order introduced “Know Your Customer” (KYC) requirements for Infrastructure-as-a-Service (IaaS) providers, mandating that cloud computing companies verify the identity of foreign customers and monitor large training runs. The Bureau of Industry and Security’s proposed rule required US IaaS providers to implement Customer Identification Programs (CIP) including the following (sketched in code after the list):
- Collection of customer name, address, payment source, email, telephone, and IP addresses
- Verification of whether beneficial owners are US persons
- Reporting to Commerce when foreign customers train large AI models with potential malicious applications
- Violations subject to civil and criminal penalties under the International Emergency Economic Powers Act
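A hypothetical sketch of how an IaaS provider might represent these CIP obligations. All field and function names here are ours, not the proposed rule’s, and the reporting condition is a simplification of the rule’s “large AI model with potential malicious applications” standard.

```python
# Hypothetical CIP record and reporting check under the proposed IaaS KYC rule.
# Field names and the threshold are illustrative, not from the regulation.
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    name: str
    address: str
    payment_source: str
    email: str
    telephone: str
    ip_addresses: list[str]
    beneficial_owner_us_person: bool  # CIP must verify US-person status

def must_report_to_commerce(customer: CustomerRecord, training_flop: float,
                            large_run_threshold: float = 1e26) -> bool:
    """Simplified trigger: a foreign customer running a large training run."""
    is_foreign = not customer.beneficial_owner_us_person
    return is_foreign and training_flop >= large_run_threshold
```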
These requirements reflected recognition that compute infrastructure represents a chokepoint in AI development that the US can potentially control. By leveraging American companies’ dominance in cloud computing, the order extended US regulatory reach to foreign AI developers who rely on American infrastructure—complementing export controls on AI chips.
The practical implementation faced several challenges. Defining “large training runs” in real-time requires technical sophistication from cloud providers, who must distinguish AI training from other compute-intensive applications. Moreover, determined adversaries might circumvent these requirements by using non-US cloud providers or developing domestic computing capabilities.
Safety Implications and Risk Assessment
Section titled “Safety Implications and Risk Assessment”Promising Aspects for AI Safety
Section titled “Promising Aspects for AI Safety”The order’s most significant safety contribution was establishing the principle that frontier AI development requires government oversight. By creating mandatory reporting requirements and institutional evaluation capacity, it moved beyond purely voluntary industry commitments toward structured accountability. The compute-based thresholds provided objective criteria that avoided subjective judgments about AI capabilities while capturing systems of genuine concern.
The institutional infrastructure created by the order built long-term capacity for AI governance that could prove crucial as capabilities advance. AISI’s technical expertise and evaluation methodologies may yet become essential tools for assessing increasingly powerful systems. The institute’s international coordination role also created foundations for global governance frameworks that could address catastrophic risks requiring multilateral cooperation.
The order’s breadth across multiple risk categories—from algorithmic bias to national security threats—reflected a sophisticated understanding of AI’s diverse impact pathways. By addressing both immediate harms and long-term risks simultaneously, it avoided the false dichotomy between near-term and existential AI safety concerns. The integration of fairness, security, and catastrophic risk considerations within a single framework could prove influential for future governance approaches.
Concerning Limitations
Section titled “Concerning Limitations”Despite its comprehensive scope, the order lacked mechanisms to actually prevent the development or deployment of dangerous AI systems. The reporting requirements provided visibility but not control, and the order included no authority to pause training runs or restrict model releases based on safety concerns. This represented a fundamental limitation for addressing catastrophic risks that might emerge from future AI systems.
The voluntary nature of many provisions weakened the order’s potential effectiveness. While reporting requirements were mandatory, many safety-related provisions relied on industry cooperation rather than enforceable mandates. Companies that chose not to comply faced unclear consequences, undermining the order’s credibility as a regulatory framework. The absence of specified penalties or enforcement mechanisms reflected the limited authority available through executive action.
The order’s durability concern proved decisive. As executive action rather than legislation, its provisions could be modified or revoked entirely by a future administration—a regulatory uncertainty that may have discouraged long-term compliance investments. This political fragility is a significant weakness for addressing long-term AI risks that require sustained governance approaches spanning multiple electoral cycles.
International Comparison of AI Compute Thresholds
Section titled “International Comparison of AI Compute Thresholds”| Jurisdiction | Threshold | Scope | Obligations | Status |
|---|---|---|---|---|
| US EO 14110 | 10^26 FLOP | General dual-use models | Report to Commerce; share red-team results | Revoked Jan 2025 |
| US EO 14110 | 10^23 FLOP | Biological sequence models | Same as above | Revoked Jan 2025 |
| EU AI Act | 10^25 FLOP | GPAI with systemic risk | Registration; model evaluation; incident reporting | In force Aug 2025 |
| UK (voluntary) | None specified | Frontier models | Voluntary pre-deployment testing with UK AISI | Active |
| China (proposed) | Not compute-based | Foundation models serving public | Registration; security assessment; content moderation | Partial implementation |
The EU AI Act’s threshold (10^25 FLOP) is 10x lower than the US EO’s (10^26 FLOP), meaning more models face regulatory obligations in Europe. The US threshold was intentionally set high—a Biden Administration official stated it was designed so “current models wouldn’t be captured but the next generation state-of-the-art models likely would.”
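A quick illustration of how the two thresholds classify the reference models from the comparison table earlier; the FLOP figures are public estimates, not official disclosures.

```python
# Which regime would each reference model fall under? (Estimates only.)
US_EO_THRESHOLD = 1e26      # EO 14110 reporting trigger, revoked Jan 2025
EU_AI_ACT_THRESHOLD = 1e25  # GPAI "systemic risk" presumption

models = {"GPT-3": 3.14e23, "GPT-4 (est.)": 2e25, "GPT-5 (est.)": 3e25}
for name, flop in models.items():
    us = "yes" if flop >= US_EO_THRESHOLD else "no"
    eu = "yes" if flop >= EU_AI_ACT_THRESHOLD else "no"
    print(f"{name}: US reporting={us}, EU systemic-risk={eu}")

# GPT-4- and GPT-5-class models clear the EU threshold but would not have
# triggered the (now-revoked) US reporting requirement.
```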
Revocation and Aftermath
Section titled “Revocation and Aftermath”Trump Administration Response
Section titled “Trump Administration Response”On January 20, 2025, President Trump revoked Executive Order 14110 within hours of assuming office. The White House fact sheet stated that the order “hindered AI innovation and imposed onerous and unnecessary government control over the development of AI.”
Policy Paradigm Comparison
Section titled “Policy Paradigm Comparison”| Dimension | Biden EO 14110 | Trump EO 14179 & Subsequent Orders |
|---|---|---|
| Primary framing | Safety and trustworthiness | Innovation and competitiveness |
| Government role | Active oversight and evaluation | Remove barriers; minimize intervention |
| Compute thresholds | 10^26 FLOP triggers mandatory reporting | Revoked; no federal thresholds |
| AISI/CAISI mission | Pre-deployment safety testing | Innovation promotion; national security focus |
| State regulation | Neutral; states develop own frameworks | Aggressive preemption via DOJ litigation |
| International stance | Multilateral safety cooperation | Competitive advantage; refused Paris communique |
| Industry relationship | Mandatory reporting + voluntary testing agreements | Voluntary engagement; “pro-growth” emphasis |
Three days later, on January 23, 2025, Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which:
- Directed agencies to identify and revise/rescind all EO 14110 actions “inconsistent with enhancing America’s leadership in AI”
- Mandated development of an “action plan” within 180 days to “sustain and enhance America’s global AI dominance”
- Explicitly framed AI development as a matter of national competitiveness over safety
- Required OMB to revise memoranda M-24-10 and M-24-18 within 60 days
Vice President Vance subsequently stated that “pro-growth AI policies” should be prioritized over safety, and the US refused to sign the February 2025 AI Action Summit communique in Paris.
What Survived the Revocation
Section titled “What Survived the Revocation”The revocation did not automatically repeal everything implemented under EO 14110. Legal analysis indicates:
| Category | Status | Uncertainty |
|---|---|---|
| Completed agency actions | Remain unless specifically reversed | High—under review |
| Final rules (e.g., IaaS KYC) | Require formal rulemaking to rescind | Medium |
| Voluntary industry agreements | Continue unless parties withdraw | Low |
| AISI evaluations completed | Published; cannot be “unreviewed” | None |
| International agreements | Continue; diplomatic relations independent | Low |
| Chief AI Officer designations | Remain at agency discretion | Medium |
The Commerce Department’s Framework for AI Diffusion and other final rules may require separate rulemaking processes to revoke, providing some continuity even as the overall framework shifts.
AISI to CAISI Transformation
Section titled “AISI to CAISI Transformation”In June 2025, the US AI Safety Institute was renamed the Center for AI Standards and Innovation (CAISI) with a fundamentally different mission. According to Commerce Secretary Howard Lutnick: “For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards. CAISI will evaluate and enhance US innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards.”
This represents a shift from:
- Safety evaluation → Innovation promotion
- Pre-deployment risk assessment → National security focus
- International safety coordination → Competitive advantage emphasis
The December 2025 NIST announcement of $10M in AI centers (with MITRE) and a planned $10M AI for Resilient Manufacturing Institute suggests resources are being redirected toward manufacturing and cybersecurity applications rather than frontier model safety evaluation.
State Law Preemption Order (December 2025)
Section titled “State Law Preemption Order (December 2025)”On December 11, 2025, President Trump signed a new executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which directly targets state-level AI regulation. This order represents a significant expansion of federal AI policy beyond simply revoking Biden-era rules.
| Provision | Mechanism | Timeline |
|---|---|---|
| AI Litigation Task Force | DOJ to sue states over AI laws deemed to obstruct federal policy | Immediate |
| Commerce Department evaluation | Identify “onerous” state AI laws for DOJ referral | 90 days |
| FTC policy statement | Clarify FTC Act preemption of state AI disclosure requirements | 90 days |
| Federal funding leverage | Study withholding rural broadband funding from states with unfavorable AI laws | Under review |
| Legislative recommendation | Prepare proposal for uniform federal AI framework | Ongoing |
The order explicitly targets the Colorado AI Act, claiming it “may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.” At minimum, Commerce must identify state laws requiring AI models to alter “truthful outputs” or compel disclosures “that would violate the First Amendment.”
Legal analysts note the executive order cannot itself preempt state law—only Congress or the courts can do so. Until legal challenges are resolved, state AI laws remain enforceable. The order functions as a “pressure-and-positioning instrument” to narrow the practical space for state AI regulation rather than an immediate legal override.
US AI Governance Timeline (2023-2025)
Section titled “US AI Governance Timeline (2023-2025)”Implementation Progress (Pre-Revocation)
Section titled “Implementation Progress (Pre-Revocation)”Completed Actions (Oct 2023 - Jan 2025)
Section titled “Completed Actions (Oct 2023 - Jan 2025)”Stanford HAI’s implementation tracker documented approximately 85% completion of the order’s 150 distinct requirements before revocation:
| Policy Area | Requirements | Completion Rate | Key Actions |
|---|---|---|---|
| AI Safety & Security | ≈25 | High | AISI created; evaluation agreements signed |
| Civil Rights & Bias | ≈20 | High | Agency guidance issued |
| Consumer Protection | ≈15 | Medium | Standards development ongoing |
| Labor & Workforce | ≈15 | Medium | Reports published |
| Innovation & Competition | ≈20 | High | Research initiatives launched |
| Government Modernization | ≈30 | High | Chief AI Officers designated |
| International Cooperation | ≈15 | High | UK AISI partnership; international network launched |
| Emerging Threats | ≈10 | Medium | Biosecurity framework under development |
Key Accomplishments
Section titled “Key Accomplishments”Despite its short duration, the order achieved several notable outcomes:
Model Evaluation Precedent: The joint US-UK evaluations of Claude 3.5 Sonnet and OpenAI o1 established government capacity for pre-deployment testing of frontier models—the first such government-led assessments anywhere. The o1 evaluation notably found the model “solved an additional three cryptography-related challenges that no other model completed.”
International Network: In November 2024, the US launched the International Network of AI Safety Institutes, establishing formal cooperation with the UK, Canada, Japan, Singapore, and other allies on AI safety research.
Industry Cooperation: Voluntary agreements with Anthropic and OpenAI demonstrated that frontier AI companies would accept government access to pre-release models—a precedent that may persist even after revocation.
Key Uncertainties and Future Outlook
Section titled “Key Uncertainties and Future Outlook”The Broader 2024-2025 Regulatory Landscape
Section titled “The Broader 2024-2025 Regulatory Landscape”The EO 14110 revocation occurred within a rapidly evolving AI policy environment (the Change column below is recomputed in the short sketch after the table):
| Level | 2023 | 2024 | Change |
|---|---|---|---|
| Federal AI regulations | 25 | 59 | +136% |
| Agencies issuing regulations | 21 | 42 | +100% |
| State AI bills proposed | ≈300 | 629 | +110% |
| State AI bills passed | ≈50 | 131 | +162% |
| Congressional AI bills proposed | ≈100 | 211 | +111% |
| Congressional AI bills passed | 1 | 4 | +300% (from low base) |
| Prior EO compliance (agencies filing inventories) | 53% | Improved | EO drove compliance |
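The Change column follows directly from the two yearly counts; a short recomputation using the numbers from the table above:

```python
# Recompute the year-over-year growth figures in the table above.
counts = {
    "Federal AI regulations": (25, 59),
    "Agencies issuing regulations": (21, 42),
    "State AI bills proposed": (300, 629),
    "State AI bills passed": (50, 131),
    "Congressional AI bills proposed": (100, 211),
    "Congressional AI bills passed": (1, 4),
}
for label, (y2023, y2024) in counts.items():
    change = 100 * (y2024 - y2023) / y2023
    print(f"{label}: +{change:.0f}%")  # e.g. "Federal AI regulations: +136%"
```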
This landscape reveals a core tension: while federal AI governance has fragmented following the EO revocation, state-level activity has accelerated dramatically—a 110% increase in bills proposed and 162% increase in bills passed year-over-year. The December 2025 state preemption order represents an attempt to address this fragmentation by federal assertion rather than federal legislation. According to the Stanford HAI 2025 AI Index, the AI Action Plan drew over 10,000 public comments, yet Congress has not passed major AI legislation since the AI in Government Act of 2020.
What Happens Next?
Section titled “What Happens Next?”With EO 14110 revoked and AISI transformed into CAISI, several key questions remain:
| Question | Optimistic Scenario | Pessimistic Scenario | Current Assessment |
|---|---|---|---|
| Will voluntary industry agreements continue? | Labs maintain AISI relationships independently | Labs reduce cooperation without mandate | Medium uncertainty—depends on lab incentives |
| Will international coordination survive? | UK/EU/allies continue; US rejoins later | US isolation undermines global frameworks | Medium-high—US refused to sign Paris communique |
| Will Congress legislate AI safety? | Bipartisan legislation codifies key provisions | No legislation; state patchwork emerges | High uncertainty—no major bills advancing |
| Will compute thresholds become obsolete? | Future frameworks adopt capability-based triggers | No governance framework adapts | High—3-5 year threshold for obsolescence |
| Will frontier labs face any oversight? | Industry self-governance; state regulations | No meaningful oversight until incident | Medium-high—depends on state action and incidents |
Lessons for AI Governance
Section titled “Lessons for AI Governance”The EO 14110 experience offers several lessons for future AI governance efforts:
Executive action fragility: The complete revocation within 15 months demonstrates that executive orders cannot provide durable AI governance. Of the approximately 150 requirements in EO 14110, roughly 85% were completed before revocation—yet all this implementation effort could be unwound by a single signature. Any sustainable framework requires congressional legislation or deeply embedded institutional practices that survive administration changes. For comparison, the EU AI Act took three years to negotiate but cannot be revoked by a single executive; amending it requires the EU’s ordinary legislative procedure involving both the European Parliament and the Council.
Compute thresholds have a shelf life: The 10^26 FLOP threshold, designed to capture “next-generation” models, was never actually triggered before revocation. Researchers estimate such thresholds become outdated within 3-5 years as algorithmic efficiency improves.
Voluntary cooperation is necessary but insufficient: The Anthropic and OpenAI agreements demonstrated frontier labs will cooperate with government oversight—but this cooperation was voluntary and contingent on political conditions that no longer exist.
International coordination requires US participation: The International Network of AI Safety Institutes launched just months before the US pivot away from safety-focused governance. Without sustained US engagement, international safety coordination faces significant headwinds.
Sources
Section titled “Sources”Primary Sources
Section titled “Primary Sources”- Executive Order 14110 (Federal Register) - Full text of Biden order
- Executive Order 14179 (Federal Register) - Trump replacement order
- Executive Order on State Law Preemption (White House) - December 2025 state preemption order
- Commerce Secretary Statement on CAISI - AISI transformation announcement
Implementation Tracking
Section titled “Implementation Tracking”- Stanford HAI Executive Action Tracker - Implementation progress monitoring
- Stanford HAI AI Index 2025 - Policy landscape analysis
- NIST AI Safety Institute - AISI resources and evaluations
Analysis
Section titled “Analysis”- Georgetown CSET Analysis - Policy analysis of the revocation
- Congress.gov CRS Report - Congressional Research Service analysis
- TechPolicy.Press Analysis - AISI to CAISI renaming implications
- Epoch AI Notes on GPT-5 Compute - Compute threshold analysis
- Gibson Dunn State Preemption Analysis - Legal analysis of December 2025 order
AI Transition Model Context
Section titled “AI Transition Model Context”The US Executive Order (while in effect) affected the AI Transition Model through Civilizational Competence—society’s aggregate capacity to navigate the AI transition well, including governance effectiveness, epistemic health, coordination capacity, and adaptive resilience:
| Factor | Parameter | Impact |
|---|---|---|
| Civilizational Competence | Regulatory Capacity | Created AISI and compute-based reporting requirements |
| Civilizational Competence | Institutional Quality | Established precedent for government oversight of frontier AI |
| Civilizational Competence | International Coordination | Launched international AI safety network with allies |
The order’s revocation after 15 months demonstrates the fragility of executive action for AI governance; congressional legislation would provide more durable institutional capacity.