Executive Order 14179: Removing Barriers to American Leadership in AI
EO 14179 marks a major U.S. policy pivot away from precautionary AI governance toward deregulation and competitive dominance, directly revoking Biden-era mandatory safety reporting and risk management frameworks. This article covers the order, its implementing actions, and its implications for AI safety oversight, including criticisms concerning safety rollbacks, worker protections, state-federal tensions, and constitutional questions.
Quick Assessment
| Attribute | Detail |
|---|---|
| Type | U.S. Federal Executive Order |
| Signed | January 23, 2025 |
| Signed by | President Donald Trump |
| Revokes | Executive Order 14110 (Biden, October 30, 2023) |
| Key mandate | AI Action Plan within 180 days |
| Action Plan released | July 31, 2025 |
| Primary focus | Deregulation, U.S. AI dominance, economic competitiveness, national security |
| AI safety orientation | Deemphasizes prior safety and equity requirements; no explicit treatment of alignment or existential risk |
Key Links
| Source | Link |
|---|---|
| Official Website | federalregister.gov |
| Wikipedia | en.wikipedia.org |
| Wikidata | wikidata.org |
Overview
Executive Order 14179, formally titled "Removing Barriers to American Leadership in Artificial Intelligence," was signed by President Donald Trump on January 23, 2025, three days after he returned to office. The order establishes as U.S. policy the goal to sustain and enhance America's global AI dominance in service of human flourishing, economic competitiveness, and national security. Its primary mechanism is deregulatory: it revokes Biden-era AI governance structures, directs federal agencies to identify and eliminate regulations seen as impediments to AI deployment, and mandates development of a comprehensive national AI Action Plan.
The order directly replaced US Executive Order on Safe, Secure, and Trustworthy AI (EO 14110), which President Biden had signed on October 30, 2023. Where Biden's order had emphasized risk management, civil rights protections, transparency, equity, and mandatory safety disclosures prior to public deployment, EO 14179 reorients federal AI policy around speed of adoption and competitive advantage. The order also revoked related Office of Management and Budget memoranda (M-24-10 and M-24-18) that had governed federal AI procurement and governance practices.
In AI safety terms, EO 14179 represents a significant policy pivot away from precautionary frameworks. It contains no explicit treatment of alignment research, existential risk, or long-term safety concerns. Its framing of "bias" is primarily political rather than technical—the order directs agencies to develop AI systems free from what it characterizes as ideological influence, rather than addressing risks of deceptive or misaligned behavior. The order's downstream effects on AI safety work, both within the federal government and in the broader research community, remain contested.
History and Timeline
October 30, 2023: President Biden signs EO 14110, directing federal agencies to implement extensive AI safety standards, including requirements for AI developers to share safety test results with the government before public release, algorithmic discrimination protections, equity assessments, worker protections, and visa recommendations for foreign AI experts (covering O-1, EB-1, EB-2, and entrepreneur parole categories).
January 20, 2025: On his first day back in office, Trump rescinds EO 14110 as part of a broader action titled "Initial Rescissions of Harmful Executive Orders and Actions."
January 23, 2025: Trump signs EO 14179, published in the Federal Register on January 31, 2025 (90 Fed. Reg. 8741). The order formally replaces EO 14110 and directs agency heads, within 180 days, to review, suspend, revise, or rescind any actions taken under the prior order that conflict with the new policy goals.
February 6, 2025: The White House issues a Request for Information (RFI) soliciting public input on the forthcoming AI Action Plan. The Office of Science and Technology Policy ultimately receives more than 10,000 public comments from academia, industry, and government stakeholders.
April 3, 2025: OMB issues two memoranda implementing EO 14179 to accelerate federal AI adoption. These memoranda set a deadline of April 3, 2026, for agencies to discontinue non-compliant high-impact AI uses.
July 22–23, 2025: The 180-day deadline passes. On July 23, President Trump signs three implementing executive orders (EO 14318, 14319, and 14320) addressing AI infrastructure permitting, "unbiased AI" principles, and AI export promotion respectively.
July 31, 2025: The White House releases the AI Action Plan, titled "Winning the Race: America's AI Action Plan," slightly past the July 22 deadline. The plan is organized around three pillars: accelerating innovation, building American AI infrastructure, and leading international diplomacy and security.
December 11, 2025: Trump signs a follow-on executive order, "Ensuring a National Policy Framework for Artificial Intelligence," which builds on EO 14179 by targeting state AI laws that the administration characterizes as conflicting with national policy, establishing an AI Litigation Task Force, and directing the Secretary of Commerce to evaluate state AI regulations within 90 days.
Key Provisions
EO 14179's operative provisions fall into two main categories: immediate revocations and forward-looking mandates.
On the revocation side, the order directs the OMB Director to revise or rescind memoranda M-24-10 and M-24-18 within 60 days, and instructs all agency heads to identify actions taken under EO 14110 that conflict with the new policy goals. Agencies are empowered to issue immediate exemptions where full rescission would take longer than 180 days.
On the mandate side, the order assigns leadership for developing an AI Action Plan to a set of White House officials—specifically the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, the Assistant to the President for National Security Affairs, the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, and the OMB Director. These officials are directed to consult with relevant agency heads and produce a comprehensive plan within 180 days.
The order also defines "artificial intelligence" by reference to 15 U.S.C. 9401(3): a machine-based system that, for a given set of human-defined objectives, makes predictions, recommendations, or decisions influencing real or virtual environments.
The AI Action Plan: "Winning the Race"
The resulting action plan, released July 31, 2025, organizes its recommendations into three pillars:
Pillar 1: Accelerating Innovation covers eliminating federal regulatory barriers to AI deployment, promoting rapid AI adoption across federal agencies, revising the NIST AI Risk Management Framework to remove references to diversity, equity, and inclusion (DEI) concepts, and directing workforce retraining programs so that AI complements rather than replaces human labor.
Pillar 2: Building American AI Infrastructure addresses streamlining of permitting processes, export control reform, federal AI procurement, and Department of Defense priorities including AI workflow automation, talent programs, virtual proving grounds, and priority access to cloud computing resources. DOL is directed to develop skills frameworks and apprenticeship programs for AI infrastructure roles such as electricians, HVAC technicians, and data center operators. The plan also calls for investment in manufacturing technologies for AI through existing mechanisms including SBIR, STTR, CHIPS R&D programs, and Title III of the Defense Production Act.
Pillar 3: International Diplomacy and Security focuses on centralizing federal AI governance, advancing U.S. AI leadership in international forums, and—controversially—considering a state's AI regulatory climate when allocating discretionary federal funding.
The plan also formalizes the Chief Artificial Intelligence Officer Council (CAIOC) for interagency coordination, creates a talent-exchange program for AI experts through the Office of Personnel Management, and directs the General Services Administration to develop an AI procurement toolbox.
Regarding AI evaluations, the Action Plan supports building an "AI Evaluation Ecosystem" with performance benchmarks for AI in regulated industries. This appears oriented toward practical reliability metrics rather than safety properties in the technical AI safety sense—no discussion of alignment evaluations, dangerous capability thresholds, or catastrophic risk assessment appears in available descriptions of the plan.
A key policy directive in the plan instructs agencies to procure large language models developed according to "Unbiased AI Principles," defined as systems grounded in "truth-seeking" (prioritizing historical accuracy and scientific inquiry) and being "ideologically neutral" (free from partisan influence and DEI concepts). EO 14319, signed July 23, 2025, separately requires OMB to issue guidance on these Unbiased AI Principles by approximately November 20, 2025.
Relationship to AI Safety
EO 14179 represents a marked departure from the regulatory philosophy underpinning Biden's EO 14110 in ways directly relevant to AI safety practice. Biden's order had required AI developers to share the results of safety tests with the U.S. government before public deployment—a provision that gave federal agencies visibility into frontier model capabilities. EO 14179 eliminates this requirement as part of its broader deregulatory mandate.
The order's language about "ideological bias" is distinct from technical AI safety concerns. While the AI safety research community focuses on problems such as scheming, deceptive alignment, and interpretability challenges, EO 14179's framing of bias concerns is political: the order targets AI systems it characterizes as reflecting partisan viewpoints or DEI frameworks. This represents a different conceptual register from the alignment research agenda pursued by organizations such as Anthropic, OpenAI, Machine Intelligence Research Institute (MIRI), and METR.
The Action Plan's acknowledgment of AI system failures—including hallucinations, adversarial prompts, and data poisoning—is oriented toward voluntary NIST benchmarks rather than mandatory safeguards. No discussion of existential or catastrophic risk appears in available descriptions of EO 14179 or its implementing actions.
Funding and Resource Implications
EO 14179 itself does not specify new funding appropriations. Its financial implications are primarily indirect, operating through the conditional allocation of existing federal resources rather than new spending authority.
The most concrete funding mechanism is the use of grant conditions to influence state AI regulatory behavior. The December 2025 follow-on EO directs the Secretary of Commerce to exclude states with what the administration characterizes as "onerous" AI laws from non-deployment funds under the Broadband Equity, Access, and Deployment (BEAD) program, to the maximum extent allowed under 47 U.S.C. 1702(e)-(f). No specific dollar amounts are given, but the BEAD program administers substantial federal broadband infrastructure funds.
Separately, EO 14318 (signed July 23, 2025) directs Commerce to launch a financial support initiative for AI infrastructure projects and to identify federal sites suitable for AI development, though without specifying funding levels.
The Action Plan references investment in AI manufacturing technologies through existing R&D programs across DOD, DOC, DOE, and NSF, including SBIR, STTR, CHIPS R&D, and Title III of the Defense Production Act mechanisms. A new National AI Research and Development Strategic Plan is to be led by OSTP to guide R&D funding priorities.
Criticisms and Concerns
Worker protections and equity: Labor advocates and policy analysts have criticized EO 14179 for rescinding Biden-era requirements that federal agencies assess AI systems for their impacts on workers, including protections against AI-enabled surveillance, job displacement, and algorithmic discrimination. Biden's EO 14110 had directed the Department of Labor to develop best practices on equity and fairness in AI deployment; EO 14179's revocation of these provisions drew criticism from labor organizations.
Health equity and marginalized communities: Health policy commentators have raised concerns that accelerating AI deployment with reduced regulatory oversight may exacerbate disparities for populations underrepresented in health data, given the reduced emphasis on civil rights, transparency, and fairness requirements that characterized the prior framework.
Immigration and AI talent: Biden's EO 14110 had recommended policies to facilitate O-1, EB-1, EB-2, and entrepreneur parole visa access for foreign nationals with AI expertise. The status of these provisions under EO 14179 remained unclear as agencies conducted their reviews; immigration attorneys and organizations such as AILA were monitoring developments.
State-federal tensions: The administration's characterization of state AI laws as creating a problematic "patchwork" of regulation, and its use of federal funding leverage and a DOJ litigation task force to challenge state laws, drew criticism from state governments, civil liberties organizations, and legal scholars. Critics raised concerns about both the constitutional basis for federal preemption and the potential for the approach to override state-level consumer protections, privacy laws, and worker protections. Constitutional challenges are anticipated, particularly regarding the limits of commerce clause arguments and questions about impermissible conditions on federal grants.
Safety oversight rollback: The removal of mandatory pre-deployment safety reporting requirements—which had given federal agencies direct visibility into frontier AI capabilities—was criticized by AI safety advocates as reducing the government's ability to identify and respond to emerging risks from powerful AI systems. The broader shift from risk management protocols to an industry self-regulation model contrasts with approaches in the European Union and represents a reduction in federal oversight of AI systems deployed in high-stakes applications.
Concentration of governance: The order's centralization of AI governance authority in the White House, combined with the use of litigation and funding conditions to limit state regulatory authority, raised concerns about democratic accountability and the pace at which sweeping policy changes could be implemented without congressional action. After Congress declined to enact a moratorium on state AI laws through the "One Big Beautiful Bill" legislation in mid-2025, the December 2025 executive order pursued similar ends through unilateral executive action.
Key Uncertainties
Several important questions about EO 14179's effects remain unresolved or contested:
- Immigration policy: The ultimate fate of Biden-era visa facilitation provisions for AI experts (O-1, EB-1, EB-2, entrepreneur parole) remains unclear following agency reviews.
- State preemption: The legal durability of the federal funding conditions and DOJ litigation strategy targeting state AI laws faces anticipated constitutional challenges; outcomes are uncertain.
- Implementation: As of early 2026, agency reviews of EO 14110-derived actions are ongoing, and no comprehensive accounting of rescinded regulations has been publicly reported.
- NIST framework revisions: The directive to revise the NIST AI Risk Management Framework to remove DEI references is underway, but the precise scope of changes and their effects on industry adoption of the framework remain to be seen.
- Safety ecosystem: The extent to which the "AI Evaluation Ecosystem" proposed in the Action Plan will address technical safety properties—versus being limited to performance metrics in regulated industries—is not yet established.