
US Executive Order on AI


Executive Order on Safe, Secure, and Trustworthy AI

| Attribute | Detail |
| --- | --- |
| Type | Executive Order |
| Number | 14110 |
| Importance | 72 |
| Durability | Can be revoked by future president |

Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence, signed by President Biden on October 30, 2023, represented the most comprehensive federal response to AI governance in US history. The 111-page directive established mandatory reporting requirements for frontier AI systems, created new oversight institutions, and addressed both immediate risks like algorithmic bias and long-term catastrophic risks from advanced AI capabilities. According to analysis by Stanford HAI, the order placed 150 specific requirements on over 50 federal entities—making it the most detailed AI policy directive ever issued by any government.

The order was revoked by President Trump on January 20, 2025, within hours of his assuming office. The White House stated that EO 14110 “hindered AI innovation and imposed onerous and unnecessary government control over the development of AI.” Stanford HAI tracking showed that approximately 85% of the order’s 150 distinct requirements had been completed before revocation.

| Dimension | Assessment | Evidence |
| --- | --- | --- |
| Duration | 15 months | Oct 30, 2023 to Jan 20, 2025 |
| Scope | 150+ requirements | Across 50+ federal entities per Stanford HAI |
| Implementation | ≈85% completed | 13/13 management requirements fully implemented per GAO |
| Budget Impact | $10M initial, $47.7M requested | AISI received $10M FY2024; Biden requested +$47.7M for FY2025 |
| Companies Affected | Fewer than 15 | BIS assessment: no more than 15 companies exceeded compute thresholds |
| Enforcement | Weak | No specified penalties; relied on voluntary cooperation |
| Durability | Revoked Day 1 | Executive action vulnerable to administration change |
| Legacy | Partial survival | Final rules (KYC) require formal rulemaking to rescind; AISI → CAISI June 2025 |

For AI safety, the order represented both progress and limitations. It normalized government oversight of frontier AI development and created institutional capacity through the US AI Safety Institute. Yet it primarily focused on transparency and voluntary cooperation rather than mandatory safety requirements or deployment restrictions.

The order’s most innovative feature was its use of computational thresholds to trigger regulatory requirements. Companies training models using more than 10^26 floating-point operations (FLOP) were required to notify the Department of Commerce before and during training, share safety testing results, and provide detailed information about model capabilities, cybersecurity measures, and red-team testing outcomes.

| Threshold | Application | Training Cost Estimate | Models Affected |
| --- | --- | --- | --- |
| 10^26 FLOP | General dual-use foundation models | $10-100M per training run | Next-gen frontier models (GPT-5 class) |
| 10^23 FLOP | Biological sequence models | ≈$10-100K per training run | Specialized bio-AI tools |
| 10^20 FLOP/s | Computing cluster capacity threshold | N/A | Large data centers |
| GPT-4 (reference) | Estimated at ≈2 × 10^25 FLOP | ≈$100M | Just under general threshold |
| GPT-5 (reference) | Estimated at ≈3 × 10^25 FLOP | ≈$200M+ | Still below threshold |
| GPT-3 (reference) | 3.14 × 10^23 FLOP | ≈$1M | ≈318x below threshold |
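
To make the threshold arithmetic concrete, here is a minimal sketch using the common C ≈ 6·N·D approximation for pretraining compute, where N is parameter count and D is training tokens. The parameter and token figures are commonly cited public estimates rather than official disclosures, and the function names are purely illustrative.

```python
# Minimal sketch: approximate pretraining compute with the common C ≈ 6·N·D rule
# (N = parameters, D = training tokens) and compare it to EO 14110's thresholds.
# Parameter/token counts below are commonly cited public estimates, not official figures.

GENERAL_THRESHOLD_FLOP = 1e26   # general dual-use foundation models
BIO_THRESHOLD_FLOP = 1e23       # models trained primarily on biological sequence data

def approx_training_flop(params: float, tokens: float) -> float:
    """Rough pretraining compute estimate: ~6 FLOP per parameter per token."""
    return 6 * params * tokens

def reporting_trigger(flop: float, biological: bool = False) -> bool:
    """Would this run have crossed the relevant EO 14110 reporting threshold?"""
    threshold = BIO_THRESHOLD_FLOP if biological else GENERAL_THRESHOLD_FLOP
    return flop >= threshold

# GPT-3: ~175B parameters, ~300B tokens -> ~3.1e23 FLOP (≈318x below 1e26)
gpt3_flop = approx_training_flop(175e9, 300e9)
print(f"GPT-3 ≈ {gpt3_flop:.2e} FLOP, triggers general threshold: {reporting_trigger(gpt3_flop)}")
```

Run on the GPT-3 estimates (≈175B parameters, ≈300B tokens), this reproduces the roughly 3.1 × 10^23 FLOP figure in the table, about 318x below the general threshold.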

A Biden Administration official stated that “the threshold was set such that current models wouldn’t be captured but the next generation state-of-the-art models likely would.” The Bureau of Industry and Security assessed that no more than 15 companies exceeded the reporting thresholds for models and computing clusters.

No model ever triggered the threshold before revocation. Epoch AI estimated GPT-5 pretraining at approximately 3 × 10^25 FLOP—still below the 10^26 threshold. This reflects a shift in frontier AI development: rather than scaling pre-training compute by orders of magnitude, labs increasingly focus on inference-time compute (reasoning models like OpenAI o1) and algorithmic efficiency improvements. xAI’s Colossus data center may have approached 10^26 FLOP for some training runs, but this remains unconfirmed.

The separate 10^23 FLOP threshold for biological sequence models was set roughly 1,000 times lower than the general threshold, reflecting concerns that even much smaller models could assist in bioweapon development and acknowledging that biological design capabilities may emerge at lower compute scales than general intelligence capabilities.

The compute-based approach offered several advantages over capability-based regulations. FLOP measurements are objective and difficult to manipulate, unlike subjective assessments of AI capabilities. The thresholds also provided predictability for developers. However, the static nature of these numbers created risks of obsolescence as algorithmic efficiency improves—researchers estimated the thresholds could become outdated within 3-5 years. According to Fenwick analysis, algorithmic improvements of approximately 2-3x per year mean a model that would have required 10^26 FLOP in 2023 might achieve equivalent capabilities with 10^25 FLOP by 2026—rendering static thresholds increasingly ineffective.
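
A back-of-the-envelope sketch of that obsolescence argument, under the stated assumption of roughly 2-3x annual algorithmic efficiency gains (the exact rate is contested):

```python
# Illustrative calculation of threshold erosion, assuming (per the analysis above)
# that algorithmic efficiency improves ~2-3x per year, i.e. the compute needed to
# reach a fixed capability level shrinks by that factor annually.

def compute_needed(flop_2023: float, years_elapsed: int, annual_gain: float) -> float:
    """Compute required in a later year for capabilities that took flop_2023 in 2023."""
    return flop_2023 / (annual_gain ** years_elapsed)

for gain in (2.0, 3.0):
    needed_2026 = compute_needed(1e26, 3, gain)
    print(f"{gain:.0f}x/yr efficiency gain: 2023-era 1e26-FLOP capabilities "
          f"need ~{needed_2026:.1e} FLOP in 2026")
# At ~2x/yr the figure is ~1.3e25; at ~3x/yr it is ~3.7e24 -- both well below the
# static 1e26 reporting threshold, which is the obsolescence concern.
```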

The order established the US AI Safety Institute (AISI) within the National Institute of Standards and Technology, tasked with developing evaluation methodologies, conducting safety assessments, and coordinating with international partners. Unlike purely advisory bodies, AISI had operational responsibilities including direct testing of frontier models and developing technical standards for the broader AI ecosystem.

| Date | Event |
| --- | --- |
| Nov 2023 | AISI founded at NIST, one day after EO 14110 signed |
| Feb 2024 | Elizabeth Kelly appointed as director; AISIC consortium created with 200+ member organizations |
| Mar 2024 | $10M initial budget allocated (vs. $17.7M FY2025 request) |
| May 2024 | NIST Director warns only $1M actually available; “very difficult without additional funding” |
| Aug 2024 | Agreements signed with Anthropic and OpenAI for pre-deployment testing |
| Nov 2024 | First joint evaluation with UK AISI: Claude 3.5 Sonnet assessment |
| Dec 2024 | OpenAI o1 model evaluation published |
| Jan 2025 | EO 14110 revoked; AISI future uncertain |
| Feb 2025 | Elizabeth Kelly resigns as director; NIST layoffs announced |
| Jun 2025 | Renamed to Center for AI Standards and Innovation (CAISI); mission refocused from safety to innovation |

AISI’s creation paralleled the UK’s AI Safety Institute, with the two signing cooperation agreements and developing shared evaluation frameworks. The November 2024 joint evaluation of Claude 3.5 Sonnet tested biological capabilities, cyber capabilities, software/AI development, and safeguard efficacy—representing the first such government-led assessment of a frontier model.

However, AISI faced significant resource constraints. With only $1-10M in actual funding versus the $17.7M requested, and staffing well below the estimated 200+ personnel needed for full capacity, the institute struggled to match the technical sophistication of private AI laboratories.

| Institute | Established | Budget (Annual) | Staff | Key Activities |
| --- | --- | --- | --- | --- |
| US AISI/CAISI | Nov 2023 | $10M (FY24); $6M actual spending | ≈50 estimated | Model evaluation; standards development |
| UK AISI | Nov 2023 | £100M (≈$125M) over 3 years | 100+ | Pre-deployment testing; international coordination |
| Japan AISI | Feb 2024 | ¥2B (≈$13M) initial | ≈30 | Standards research; evaluation frameworks |
| Singapore AISI | Feb 2024 | Not disclosed | ≈20 | Testing frameworks; regional coordination |
| Canada AISI | Nov 2024 | C$50M ($37M) pledged | Not disclosed | Launched Nov 2024 at SF summit |
| EU AI Office | Feb 2024 | Part of EC budget | ≈140 | Regulatory enforcement; standards |

The US AISI’s $10M budget contrasts sharply with the UK’s £100M commitment. NIST Director Laurie Locascio warned in May 2024 that only $1M was actually available, stating it would be “very, very tough” to continue operations without additional funding.

Leadership Transition and Organizational Uncertainty


Elizabeth Kelly, the inaugural AISI director, resigned on February 6, 2025. In her departure announcement, she stated: “I am confident that AISI’s future is bright and its mission remains vital to the future of AI innovation.” NIST Director Laurie Locascio also departed at the start of 2025 to head the American National Standards Institute (ANSI). Reports emerged that the Trump administration planned to lay off up to 500 NIST staffers, which posed particular risk for AISI as a new organization where most employees remained on probation.

The order introduced “Know Your Customer” (KYC) requirements for Infrastructure-as-a-Service (IaaS) providers, mandating that cloud computing companies verify the identity of foreign customers and monitor large training runs. The Bureau of Industry and Security’s proposed rule would require US IaaS providers to implement Customer Identification Programs (CIPs), including:

  • Collection of customer name, address, payment source, email, telephone, and IP addresses
  • Verification of whether beneficial owners are US persons
  • Reporting to Commerce when foreign customers train large AI models with potential malicious applications
  • Violations subject to civil and criminal penalties under the International Emergency Economic Powers Act

These requirements reflected recognition that compute infrastructure represents a chokepoint in AI development that the US can potentially control. By leveraging American companies’ dominance in cloud computing, the order extended US regulatory reach to foreign AI developers who rely on American infrastructure—complementing export controls on AI chips.


The practical implementation faced several challenges. Defining “large training runs” in real-time requires technical sophistication from cloud providers, who must distinguish AI training from other compute-intensive applications. Moreover, determined adversaries might circumvent these requirements by using non-US cloud providers or developing domestic computing capabilities.
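
As a rough illustration of the screening problem, the sketch below estimates a training run’s effective compute from coarse cluster telemetry (accelerator count, peak throughput, utilization, duration) and compares it against the EO’s run and cluster thresholds. The hardware and utilization figures are illustrative assumptions rather than values from the rule, and a real provider would still face the harder problem of distinguishing training workloads from other compute-intensive jobs.

```python
# Hedged sketch of how an IaaS provider might flag a potentially reportable
# training run from coarse telemetry, under EO 14110-era thresholds. Hardware
# peak-throughput and utilization figures are illustrative assumptions, not a
# prescribed methodology.

CLUSTER_THRESHOLD_FLOP_PER_S = 1e20   # theoretical peak capacity threshold
RUN_THRESHOLD_FLOP = 1e26             # training-run reporting threshold

def cluster_peak_flops(num_chips: int, peak_flop_per_chip: float) -> float:
    """Theoretical peak throughput of the cluster."""
    return num_chips * peak_flop_per_chip

def estimated_run_flop(num_chips: int, peak_flop_per_chip: float,
                       utilization: float, days: float) -> float:
    """Effective training compute ≈ chips × peak throughput × utilization × time."""
    return num_chips * peak_flop_per_chip * utilization * days * 86_400

# Illustration: ~100k accelerators at ~1e15 FLOP/s peak each
peak = cluster_peak_flops(100_000, 1e15)                            # ~1e20 FLOP/s
run = estimated_run_flop(100_000, 1e15, utilization=0.4, days=90)   # ~3e26 FLOP
print(f"Cluster peak: {peak:.1e} FLOP/s (threshold {CLUSTER_THRESHOLD_FLOP_PER_S:.0e})")
print(f"Estimated run: {run:.1e} FLOP (threshold {RUN_THRESHOLD_FLOP:.0e})")
```

Under these assumed numbers, a 100,000-accelerator cluster sits at the 10^20 FLOP/s capacity line, and a roughly three-month run at 40% utilization would exceed the 10^26 FLOP run threshold.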

The order’s most significant safety contribution was establishing the principle that frontier AI development requires government oversight. By creating mandatory reporting requirements and institutional evaluation capacity, it moved beyond purely voluntary industry commitments toward structured accountability. The compute-based thresholds provided objective criteria that avoided subjective judgments about AI capabilities while still capturing systems of genuine concern.

The institutional infrastructure created by the order built longer-term capacity for AI governance that could prove crucial as capabilities advance. AISI’s technical expertise and evaluation methodologies may yet become essential tools for assessing increasingly powerful systems, and the institute’s international coordination role laid foundations for global governance frameworks that could address catastrophic risks requiring multilateral cooperation.

The order’s breadth across multiple risk categories, from algorithmic bias to national security threats, reflected a sophisticated understanding of AI’s diverse impact pathways. By addressing both immediate harms and long-term risks simultaneously, it avoided the false dichotomy between near-term and existential AI safety concerns. The integration of fairness, security, and catastrophic risk considerations within a single framework could prove influential for future governance approaches.

Despite its comprehensive scope, however, the order lacked mechanisms to actually prevent the development or deployment of dangerous AI systems. The reporting requirements provided visibility but not control, and the order included no authority to pause training runs or restrict model releases on safety grounds. This was a fundamental limitation for addressing catastrophic risks that might emerge from future AI systems.

The voluntary nature of many provisions further weakened its effectiveness. While the reporting requirements were mandatory, many safety-related provisions relied on industry cooperation rather than enforceable mandates. Companies that chose not to comply faced unclear consequences, undermining the order’s credibility as a regulatory framework. The absence of specified penalties or enforcement mechanisms reflected the limited authority available through executive action.

Most consequentially, the order’s durability was always constrained by its status as executive action rather than legislation. A future administration could modify or revoke its provisions entirely, creating regulatory uncertainty that discouraged long-term compliance investments, and the January 2025 revocation confirmed exactly this fragility. Such political vulnerability is a serious weakness for addressing long-term AI risks that require sustained governance spanning multiple electoral cycles.

International Comparison of AI Compute Thresholds

| Jurisdiction | Threshold | Scope | Obligations | Status |
| --- | --- | --- | --- | --- |
| US EO 14110 | 10^26 FLOP | General dual-use models | Report to Commerce; share red-team results | Revoked Jan 2025 |
| US EO 14110 | 10^23 FLOP | Biological sequence models | Same as above | Revoked Jan 2025 |
| EU AI Act | 10^25 FLOP | GPAI with systemic risk | Registration; model evaluation; incident reporting | In force Aug 2025 |
| UK (voluntary) | None specified | Frontier models | Voluntary pre-deployment testing with UK AISI | Active |
| China (proposed) | Not compute-based | Foundation models serving public | Registration; security assessment; content moderation | Partial implementation |

The EU AI Act sets a threshold 10x lower than the US EO’s (10^25 vs 10^26 FLOP), meaning more models face regulatory obligations in Europe. The US threshold was intentionally set high; as noted above, a Biden Administration official said it was designed so that “current models wouldn’t be captured but the next generation state-of-the-art models likely would.”
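
A small comparison helper makes the coverage difference explicit: given an estimated training compute figure, it lists which of the thresholds in the table above would be crossed. The threshold values come from the table; the names and structure are illustrative.

```python
# Comparison helper using the thresholds in the table above. The EU figure refers
# to the AI Act's presumption of systemic risk for general-purpose models; the US
# rows reflect the now-revoked EO 14110.

THRESHOLDS = {
    "US EO 14110 (general, revoked)": 1e26,
    "US EO 14110 (biological, revoked)": 1e23,  # applies only to biological sequence models
    "EU AI Act (GPAI systemic risk)": 1e25,
}

def regimes_triggered(training_flop: float, biological: bool = False) -> list[str]:
    """Return the regimes whose compute thresholds this training run meets."""
    hits = []
    for name, threshold in THRESHOLDS.items():
        if "biological" in name and not biological:
            continue  # the 1e23 line only applies to biological sequence models
        if training_flop >= threshold:
            hits.append(name)
    return hits

# A ~3e25 FLOP run (the public GPT-5 estimate cited earlier) crosses the EU line
# but not the US general threshold.
print(regimes_triggered(3e25))   # ['EU AI Act (GPAI systemic risk)']
```

On the ≈3 × 10^25 FLOP GPT-5 estimate cited earlier, only the EU line is crossed, which is the practical meaning of the 10x gap.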

On January 20, 2025, President Trump revoked Executive Order 14110 within hours of assuming office. The White House fact sheet stated that the order “hindered AI innovation and imposed onerous and unnecessary government control over the development of AI.”

| Dimension | Biden EO 14110 | Trump EO 14179 & Subsequent Orders |
| --- | --- | --- |
| Primary framing | Safety and trustworthiness | Innovation and competitiveness |
| Government role | Active oversight and evaluation | Remove barriers; minimize intervention |
| Compute thresholds | 10^26 FLOP triggers mandatory reporting | Revoked; no federal thresholds |
| AISI/CAISI mission | Pre-deployment safety testing | Innovation promotion; national security focus |
| State regulation | Neutral; states develop own frameworks | Aggressive preemption via DOJ litigation |
| International stance | Multilateral safety cooperation | Competitive advantage; refused Paris communique |
| Industry relationship | Mandatory reporting + voluntary testing agreements | Voluntary engagement; “pro-growth” emphasis |

Three days later, on January 23, 2025, Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which:

  • Directed agencies to identify and revise/rescind all EO 14110 actions “inconsistent with enhancing America’s leadership in AI”
  • Mandated development of an “action plan” within 180 days to “sustain and enhance America’s global AI dominance”
  • Explicitly framed AI development as a matter of national competitiveness over safety
  • Required OMB to revise memoranda M-24-10 and M-24-18 within 60 days

Vice President Vance subsequently stated that “pro-growth AI policies” should be prioritized over safety, and the US refused to sign the February 2025 AI Action Summit communique in Paris.

The revocation did not automatically repeal everything implemented under EO 14110. Legal analysis indicates:

| Category | Status | Uncertainty |
| --- | --- | --- |
| Completed agency actions | Remain unless specifically reversed | High (under review) |
| Final rules (e.g., IaaS KYC) | Require formal rulemaking to rescind | Medium |
| Voluntary industry agreements | Continue unless parties withdraw | Low |
| AISI evaluations completed | Published; cannot be “unreviewed” | None |
| International agreements | Continue; diplomatic relations independent | Low |
| Chief AI Officer designations | Remain at agency discretion | Medium |

The Commerce Department’s Framework for AI Diffusion and other final rules may require separate rulemaking processes to revoke, providing some continuity even as the overall framework shifts.

In June 2025, the US AI Safety Institute was renamed to the Center for AI Standards and Innovation (CAISI) with a fundamentally different mission. According to Commerce Secretary Howard Lutnick: “For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards. CAISI will evaluate and enhance US innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards.”

This represents a shift from:

  • Safety evaluation → Innovation promotion
  • Pre-deployment risk assessment → National security focus
  • International safety coordination → Competitive advantage emphasis

The December 2025 NIST announcement of $10M in funding for AI centers (with MITRE) and a planned $10M AI for Resilient Manufacturing Institute suggests resources are being redirected toward manufacturing and cybersecurity applications rather than frontier model safety evaluation.

State Law Preemption Order (December 2025)


On December 11, 2025, President Trump signed a new executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which directly targets state-level AI regulation. This order represents a significant expansion of federal AI policy beyond simply revoking Biden-era rules.

| Provision | Mechanism | Timeline |
| --- | --- | --- |
| AI Litigation Task Force | DOJ to sue states over AI laws deemed to obstruct federal policy | Immediate |
| Commerce Department evaluation | Identify “onerous” state AI laws for DOJ referral | 90 days |
| FTC policy statement | Clarify FTC Act preemption of state AI disclosure requirements | 90 days |
| Federal funding leverage | Study withholding rural broadband funding from states with unfavorable AI laws | Under review |
| Legislative recommendation | Prepare proposal for uniform federal AI framework | Ongoing |

The order explicitly targets the Colorado AI Act, claiming it “may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.” At minimum, Commerce must identify state laws requiring AI models to alter “truthful outputs” or compel disclosures “that would violate the First Amendment.”

Legal analysts note the executive order cannot itself preempt state law—only Congress or the courts can do so. Until legal challenges are resolved, state AI laws remain enforceable. The order functions as a “pressure-and-positioning instrument” to narrow the practical space for state AI regulation rather than an immediate legal override.


Stanford HAI’s tracker documented approximately 85% completion of the order’s 150 distinct requirements before revocation:

| Policy Area | Requirements | Completion Rate | Key Actions |
| --- | --- | --- | --- |
| AI Safety & Security | ≈25 | High | AISI created; evaluation agreements signed |
| Civil Rights & Bias | ≈20 | High | Agency guidance issued |
| Consumer Protection | ≈15 | Medium | Standards development ongoing |
| Labor & Workforce | ≈15 | Medium | Reports published |
| Innovation & Competition | ≈20 | High | Research initiatives launched |
| Government Modernization | ≈30 | High | Chief AI Officers designated |
| International Cooperation | ≈15 | High | UK AISI partnership; international network launched |
| Emerging Threats | ≈10 | Medium | Biosecurity framework under development |

Despite its short duration, the order achieved several notable outcomes:

Model Evaluation Precedent: The joint US-UK evaluation of Claude 3.5 Sonnet and OpenAI o1 established government capacity for pre-deployment testing of frontier models—the first such government-led assessments anywhere. The o1 evaluation notably found the model “solved an additional three cryptography-related challenges that no other model completed.”

International Network: In November 2024, the US launched the International Network of AI Safety Institutes, establishing formal cooperation with the UK, Canada, Japan, Singapore, and other allies on AI safety research.

Industry Cooperation: Voluntary agreements with Anthropic and OpenAI demonstrated that frontier AI companies would accept government access to pre-release models—a precedent that may persist even after revocation.

The Broader 2024-2025 Regulatory Landscape


The EO 14110 revocation occurred within a rapidly evolving AI policy environment:

| Level | 2023 | 2024 | Change |
| --- | --- | --- | --- |
| Federal AI regulations | 25 | 59 | +136% |
| Agencies issuing regulations | 21 | 42 | +100% |
| State AI bills proposed | ≈300 | 629 | +110% |
| State AI bills passed | ≈50 | 131 | +162% |
| Congressional AI bills proposed | ≈100 | 211 | +111% |
| Congressional AI bills passed | 1 | 4 | +300% (from low base) |
| Prior EO compliance (agencies filing inventories) | 53% | Improved | EO drove compliance |

This landscape reveals a core tension: while federal AI governance has fragmented following the EO revocation, state-level activity has accelerated dramatically, with a 110% year-over-year increase in bills proposed and a 162% increase in bills passed. The December 2025 state preemption order represents an attempt to address this fragmentation through federal assertion rather than federal legislation. According to the Stanford HAI 2025 AI Index, Congress has not passed major AI legislation since the AI in Government Act of 2020, even as the AI Action Plan drew over 10,000 public comments.

With EO 14110 revoked and AISI transformed into CAISI, several key questions remain:

| Question | Optimistic Scenario | Pessimistic Scenario | Current Assessment |
| --- | --- | --- | --- |
| Will voluntary industry agreements continue? | Labs maintain AISI relationships independently | Labs reduce cooperation without mandate | Medium uncertainty; depends on lab incentives |
| Will international coordination survive? | UK/EU/allies continue; US rejoins later | US isolation undermines global frameworks | Medium-high; US refused to sign Paris communique |
| Will Congress legislate AI safety? | Bipartisan legislation codifies key provisions | No legislation; state patchwork emerges | High uncertainty; no major bills advancing |
| Will compute thresholds become obsolete? | Future frameworks adopt capability-based triggers | No governance framework adapts | High; 3-5 year horizon for obsolescence |
| Will frontier labs face any oversight? | Industry self-governance; state regulations | No meaningful oversight until incident | Medium-high; depends on state action and incidents |

The EO 14110 experience offers several lessons for future AI governance efforts:

Executive action fragility: The complete revocation within 15 months demonstrates that executive orders cannot provide durable AI governance. Of the approximately 150 requirements in EO 14110, roughly 85% were completed before revocation, yet all of that implementation effort could be unwound by a single signature. Any sustainable framework requires congressional legislation or deeply embedded institutional practices that survive administration changes. For comparison, the EU AI Act took three years to negotiate but cannot be undone by a single executive; amending it requires agreement from both the European Parliament and the Council.

Compute thresholds have a shelf life: The 10^26 FLOP threshold, designed to capture “next-generation” models, was never actually triggered before revocation. Researchers estimate such thresholds become outdated within 3-5 years as algorithmic efficiency improves.

Voluntary cooperation is necessary but insufficient: The Anthropic and OpenAI agreements demonstrated frontier labs will cooperate with government oversight—but this cooperation was voluntary and contingent on political conditions that no longer exist.

International coordination requires US participation: The International Network of AI Safety Institutes launched just months before the US pivot away from safety-focused governance. Without sustained US engagement, international safety coordination faces significant headwinds.



The US Executive Order (while in effect) affected the AI Transition Model through Civilizational Competence:

| Factor | Parameter | Impact |
| --- | --- | --- |
| Civilizational Competence | Regulatory Capacity | Created AISI and compute-based reporting requirements |
| Civilizational Competence | Institutional Quality | Established precedent for government oversight of frontier AI |
| Civilizational Competence | International Coordination | Launched international AI safety network with allies |

The order’s revocation after 15 months demonstrates the fragility of executive action for AI governance; congressional legislation would provide more durable institutional capacity.