NIST and AI Safety
Quick Assessment
| Aspect | Assessment |
|---|---|
| Primary Role | U.S. federal standards agency developing AI measurement tools, frameworks, and guidelines |
| Key Initiative | AI Risk Management Framework (AI RMF 1.0, released January 2023) |
| Funding | FY 2025 budget request: $47.7M for AI work; $20M for MITRE AI centers (2025) |
| Recent Development | U.S. AI Safety Institute (AISI) established under 2023 Executive Order |
| Approach | Voluntary, non-regulatory standards emphasizing trustworthy AI |
| Influence | Over 280 organizations in NIST AI Consortium; shapes U.S. AI policy implementation |
Overview
The National Institute of Standards and Technology (NIST) is a U.S. Department of Commerce agency that has become central to American artificial intelligence governance through its development of measurement standards, risk management frameworks, and safety guidelines.1 NIST was founded in 1901 as the National Bureau of Standards; although it has worked on computing standards since the 1960s, its dedicated AI work began in earnest around 2016-2018.2
NIST’s core AI mission focuses on promoting “trustworthy AI” through science-based standards and voluntary frameworks rather than regulation.3 The agency emphasizes that “safety breeds trust, trust enables adoption, and adoption accelerates innovation” as its guiding principle.4 This approach positions NIST as a coordinator between government, industry, and academia, creating consensus standards that organizations can voluntarily adopt.
The agency’s influence expanded significantly with the October 2023 Executive Order on Safe, Secure, and Trustworthy AI, which established the U.S. AI Safety Institute (AISI) within NIST and gave the agency new mandates for AI system evaluation, red-teaming, and international standards coordination.5 NIST’s work spans fundamental research, applied projects in manufacturing and cybersecurity, and the convening of large multi-stakeholder consortia to develop practical guidance for AI deployment.
History and Evolution
Early Computing Work
While NIST has conducted computing research since the mid-1960s—including developing MAGIC, one of the first intelligent computer graphics terminals—its explicit focus on artificial intelligence emerged much later.6 The agency’s Information Technology Laboratory (ITL) built capabilities in cryptography, biometrics, and data processing standards through the 1990s and 2000s, establishing foundations for later AI work.7
Emergence of AI Focus (2016-2023)
NIST’s dedicated AI program began taking shape around 2016-2018 with the Fundamental and Applied Research and Standards for AI Technologies (FARSAIT) initiative.8 This program aimed to develop comprehensive guidance on trustworthy AI systems, including terminology, taxonomy, and measurement approaches. However, the specific timeline of NIST’s early AI involvement remains sparsely documented in public sources.
The agency’s AI work accelerated dramatically with the release of the AI Risk Management Framework (AI RMF 1.0) in January 2023, following extensive public consultation.9 This voluntary framework provided organizations with a structured approach to managing AI risks through four core functions: Govern, Map, Measure, and Manage.10
AI Safety Institute Era (2023-Present)
The October 30, 2023 Executive Order on Safe, Secure, and Trustworthy AI transformed NIST’s role by establishing the U.S. AI Safety Institute (AISI) within the agency.11 AISI’s strategic vision focuses on three interconnected pillars: advancing the science of AI safety through research and evaluation, disseminating safety practices to diverse stakeholders, and supporting coordination across the AI safety community.12
In February 2024, Commerce Secretary Gina Raimondo announced the inaugural leadership team, appointing Elizabeth Kelly as director and Elham Tabassi as chief technology officer.13 The team expanded in April 2024 with five additional senior leaders, including Paul Christiano (former OpenAI researcher) as Head of AI Safety and Adam Russell as Chief Vision Officer.14
Major Programs and Initiatives
Section titled “Major Programs and Initiatives”AI Risk Management Framework
The AI RMF represents NIST’s flagship contribution to AI governance. Released January 26, 2023, the framework addresses trustworthy AI attributes including validity, reliability, safety, security, accountability, transparency, privacy-enhancement, and fairness.15 On July 26, 2024, NIST released NIST AI 600-1, the Generative AI Profile, which identifies unique risks posed by generative AI systems and proposes tailored management actions.16
The framework identifies 12 risk categories organizations should address, ranging from data privacy and information security to dangerous content, harmful bias, environmental impacts, and CBRN (Chemical, Biological, Radiological, and Nuclear) information risks.17 Updated guidance released in 2025 expanded the framework to address supply chain vulnerabilities, model provenance, data integrity, and third-party risks, while introducing maturity model guidance for measuring organizational AI risk management capabilities.18
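To make the framework’s shape concrete, here is a minimal sketch of how an organization might encode the four core functions and a few of the Generative AI Profile’s risk categories in a simple risk register. The class and field names are illustrative assumptions for this sketch, not part of any NIST specification or tooling.

```python
# Illustrative only: a toy risk register organized around the AI RMF's
# four core functions. Names and structure are this sketch's assumptions.
from dataclasses import dataclass, field
from enum import Enum

class RmfFunction(Enum):
    GOVERN = "Govern"    # policies, accountability, culture
    MAP = "Map"          # context and risk identification
    MEASURE = "Measure"  # analysis, tracking, metrics
    MANAGE = "Manage"    # prioritization, response, recovery

# A few of the 12 risk categories named in the Generative AI Profile.
RISK_CATEGORIES = [
    "Data privacy",
    "Information security",
    "Dangerous content",
    "Harmful bias",
    "Environmental impacts",
    "CBRN information",
]

@dataclass
class RiskEntry:
    category: str
    description: str
    actions: dict[RmfFunction, list[str]] = field(default_factory=dict)

register = [
    RiskEntry(
        category="Harmful bias",
        description="Model outputs disadvantage a protected group",
        actions={
            RmfFunction.MAP: ["Identify affected user populations"],
            RmfFunction.MEASURE: ["Run disaggregated accuracy metrics"],
            RmfFunction.MANAGE: ["Gate deployment on fairness thresholds"],
        },
    ),
]

for entry in register:
    for fn, steps in entry.actions.items():
        print(f"{entry.category} -> {fn.value}: {steps}")
```

The point of the structure is that every identified risk is traced through all four functions rather than handled as a one-off mitigation.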
U.S. AI Safety Institute (AISI)
AISI operates with three core goals: advancing AI safety science through model testing and red-teaming, disseminating safety practices through guidelines and tools, and supporting stakeholder coordination.19 The institute established the AI Safety Institute Consortium (AISIC) in February 2024, bringing together over 200 members from academia, advocacy organizations, private industry, and government to develop standards collaboratively.20
In practice, AISI focuses on developing evaluation approaches for frontier AI models, conducting security assessments, and creating measurement tools. The Center for AI Standards and Innovation (CAISI), previously named the U.S. AI Safety Institute, leads evaluations of U.S. and adversary AI systems and has established voluntary agreements with multiple developers of cutting-edge AI models for collaborative research and testing.21
NIST AI Consortium
The NIST AI Consortium represents a major public-private partnership, with over 280 organizations from industry, academia, and civil society participating through Cooperative Research and Development Agreements (CRADAs).22 The consortium develops science-based guidelines and standards for AI measurement through open collaborative research, creating a foundation for global AI metrology.
Membership is open to organizations that can contribute expertise, products, data, or models. This approach allows NIST to leverage industry capabilities while maintaining its neutral convening role.
Standards Development Initiatives
In March 2025, NIST launched the AI Standards “Zero Drafts” Pilot Project, an innovative approach to accelerating standards development.23 The project creates preliminary stakeholder-driven drafts on topics like AI risk management, transparency, and procurement, which are then submitted to formal standards developing organizations (SDOs) for consensus development. Organizations can provide input via aistandardszerodrafts@nist.gov.
NIST also released a Global AI Standards Engagement Plan (NIST AI 100-5) in July 2024, outlining the agency’s approach to international AI standards development.24 The plan addresses both “horizontal” (cross-sector) and “vertical” (sector-specific) standards needs, aiming to ensure scientifically sound, accessible standards globally.
Research and Testing Infrastructure
NIST’s research portfolio includes applied AI work in advanced materials discovery, robotic manufacturing, wireless systems, and cybersecurity.25 The agency released Dioptra, an open-source software tool for adversarial AI testing, enabling organizations to evaluate how adversarial attacks affect AI system performance.26
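Dioptra’s own interfaces are not reproduced here; the sketch below is a generic, self-contained illustration of the kind of experiment such tools automate: measuring how a small gradient-sign (FGSM-style) perturbation degrades a toy classifier. Everything in it is an assumption of this sketch, built only on NumPy.

```python
# Generic adversarial-robustness check (not Dioptra's API): compare a toy
# logistic model's accuracy on clean inputs vs. FGSM-perturbed inputs.
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression "model" on 2D points.
w, b = np.array([1.5, -2.0]), 0.1
X = rng.normal(size=(200, 2))
y = (X @ w + b > 0).astype(float)  # labels the model classifies correctly by construction

def predict(X):
    return 1 / (1 + np.exp(-(X @ w + b)))

def accuracy(X, y):
    return float(np.mean((predict(X) > 0.5) == y))

# FGSM: nudge each input in the sign of its loss gradient. For logistic
# loss the input gradient is (p - y) * w.
eps = 0.25
grad = (predict(X) - y)[:, None] * w[None, :]
X_adv = X + eps * np.sign(grad)

print(f"clean accuracy:       {accuracy(X, y):.2f}")
print(f"adversarial accuracy: {accuracy(X_adv, y):.2f}")
```

Tools like Dioptra generalize this pattern to real models and attack libraries, sweeping perturbation budgets and reporting the resulting accuracy degradation.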
In December 2025, NIST announced a $20 million partnership with MITRE Corporation to establish two research centers: the AI Economic Security Center for U.S. Manufacturing Productivity and the AI Economic Security Center to Secure U.S. Critical Infrastructure from Cyberthreats.27 These centers focus on developing technology evaluations and agentic AI tools to enhance critical infrastructure security and manufacturing competitiveness.
Additional partnerships include a $6 million center with Carnegie Mellon University for cooperative AI testing and evaluation research,28 and over $1.8 million in Small Business Innovation Research (SBIR) awards to 18 companies developing AI-related products, with Phase II funding available up to $400,000.29
Cybersecurity and AI Integration
On December 16, 2025, NIST released a preliminary draft of its Cybersecurity Framework Profile for Artificial Intelligence (Cyber AI Profile, NISTIR 8596).30 This profile maps three AI focus areas—Secure (managing AI system cybersecurity risks), Defend (using AI to enhance cybersecurity), and Thwart (defending against adversarial AI uses)—onto the six core functions of NIST’s Cybersecurity Framework 2.0.31
The profile addresses securing AI dependencies, integrating AI risks into organizational risk tolerance, deploying AI-augmented security teams, and detecting threats in supplier models. Public comments were solicited through January 30, 2026, with a full public draft planned for later in 2026.32
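As a rough illustration of the profile’s structure, the mapping can be thought of as a matrix of focus areas against core functions. The focus-area and CSF 2.0 function names come from the draft; the example activities below are this sketch’s paraphrases, not draft text.

```python
# Hedged sketch of the Cyber AI Profile's shape: focus areas mapped onto
# CSF 2.0 core functions. Activities are illustrative paraphrases only.
CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]

CYBER_AI_PROFILE = {
    "Secure": {  # managing cybersecurity risks of AI systems themselves
        "Identify": "Inventory deployed AI systems and their dependencies",
        "Protect": "Harden model supply chains and training pipelines",
    },
    "Defend": {  # using AI to enhance cyber defense
        "Detect": "Deploy AI-augmented threat detection and triage",
        "Respond": "Use AI assistants to accelerate incident response",
    },
    "Thwart": {  # countering adversarial uses of AI
        "Govern": "Set risk tolerance for AI-enabled threats in policy",
        "Detect": "Monitor for threats embedded in supplier models",
    },
}

for focus, mappings in CYBER_AI_PROFILE.items():
    for fn, activity in mappings.items():
        assert fn in CSF_FUNCTIONS  # every mapping targets a CSF 2.0 function
        print(f"{focus} x {fn}: {activity}")
```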
Funding and Resources
NIST’s AI work operates on federal appropriations, which have faced constraints despite expanded mandates. The agency’s overall annual budget is approximately $1-1.3 billion, with the President’s FY 2025 budget requesting $47.7 million specifically for AI research, testing infrastructure, risk management guidance, and frameworks.33
However, funding has consistently fallen short of requests since FY 2022. The Fiscal Responsibility Act of 2023 set discretionary spending limits through FY 2029, further constraining available resources.34 This persistent underfunding has prompted discussions about establishing an “agency-related foundation” to attract private investment for AI talent fellowships, rapid evaluations, and benchmark development, potentially bypassing federal procurement limitations.35
Recent major investments include:
- $20 million for MITRE AI centers (manufacturing and critical infrastructure, 2025)36
- $6 million for Carnegie Mellon AI cooperative research center (2024)37
- $1.8 million in SBIR Phase I awards to 18 small businesses (2025)38
- Up to $70 million over five years for an AI-focused Manufacturing USA institute (announced July 2024, awards pending)39
Leadership and Key Personnel
NIST’s AI leadership structure centers on AISI’s executive team:
| Role | Name | Background |
|---|---|---|
| Director, AISI | Elizabeth Kelly | Special assistant to the president for economic policy; coordinates activities across Commerce, NIST, and federal government40 |
| Chief Technology Officer, AISI | Elham Tabassi | NIST’s chief AI advisor; leads technical programs focused on trustworthy AI; previously Chief of Staff, NIST Information Technology Laboratory41 |
| Head of AI Safety | Paul Christiano | Former OpenAI researcher and founder of the Alignment Research Center, a nonprofit focused on AI alignment research42 |
| Chief Vision Officer | Adam Russell | Director of AI Division at University of Southern California’s Information Sciences Institute43 |
| Acting Chief Operating Officer and Chief of Staff | Mara Campbell | Former deputy COO at Commerce’s Economic Development Administration44 |
| Senior Advisor | Rob Reich | Professor of political science at Stanford University and associate director of Stanford’s Institute for Human-Centered AI45 |
| Head of International Engagement | Mark Latonero | Former deputy director of the National AI Initiative Office at the White House Office of Science and Technology Policy46 |
This leadership team shapes NIST’s approach to AI safety evaluation, international standards coordination, and public-private partnership development.
Criticisms and Controversies
Technical Focus vs. Systemic Factors
Civil rights groups and policy organizations have criticized NIST’s frameworks for overemphasizing technical solutions to bias while neglecting systemic and institutional factors.47 In comments on NIST’s bias management proposal, advocacy groups argued the agency’s approach is “unhelpful and dangerously idealistic” because it focuses on algorithmic fixes without addressing how humans and institutions misuse AI systems.48
NIST itself acknowledges these limitations. In a March 2022 report, the agency noted that AI bias extends beyond data quality to include human biases (such as subjective decisions in filling data gaps) and systemic biases rooted in institutional discrimination.49 Research lead Reva Schwartz emphasized the need for socio-technical approaches, stating that “purely technical efforts fall short” in managing AI bias.50
Exclusion of Critical Risks from Misuse Frameworks
The Electronic Privacy Information Center (EPIC) criticized NIST AI 800-1 for deprioritizing bias, discrimination, hallucinations, and privacy risks in its guidance on dual-use foundation models.51 EPIC argues that threat actors exploit these very vulnerabilities for misuse, and excluding them creates dangerous blind spots. The organization recommended incorporating sociotechnical factors as required by Executive Order 14110.52
Lack of Concrete Solutions
NIST’s AI security concept papers have been criticized for identifying enterprise challenges—such as opacity in model training, unclear data usage, and difficulties maintaining AI system inventories—without offering specific mitigations.53 Jeff Man, a senior information security consultant, noted visibility problems: “How do you actually, as an enterprise, gain insight into what AI is deployed, and the data it’s been trained on?”54
Experts also worry whether traditional standards can adequately address emerging risks from agentic AI systems. Vince Worthington highlighted concerns about “cascading failures” where autonomous AI agents might create compounding problems, while Vince Berk of Apprentis Ventures expressed skepticism that standards processes can keep pace with AI threat evolution.55
AI-Enhanced Security Vulnerabilities
Research cited by NIST demonstrates concerning security implications of AI-assisted development. One study found that AI code improvements increase vulnerabilities by 37.6% after five iterations, emphasizing the critical need for human oversight.56 NIST acknowledges that AI accelerates various attack vectors including phishing, data poisoning, and coordinated campaigns by autonomous agents.57
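If that 37.6% figure reflects multiplicative growth across iterations (an assumption here; the cited study may define the metric differently), the implied per-iteration rate is modest but compounds quickly:

```python
# Assumes vulnerabilities compound multiplicatively per iteration; the
# cited study may measure this differently.
growth_after_5 = 1.376             # +37.6% after five iterations
r = growth_after_5 ** (1 / 5) - 1  # implied per-iteration growth rate
print(f"~{r:.1%} more vulnerabilities per improvement pass")  # ~6.6%
```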
The Institute for Security and Technology (IST) suggested NIST should treat AI models themselves as potential insider threats, where autonomous agents might self-evolve or collude to bypass security controls.58
Implementation Challenges
Organizations implementing the AI RMF face practical difficulties including efficiency losses, high resource and expertise demands, incomplete market adoption, and risk of over-complexity.59 The voluntary nature of NIST’s frameworks means adoption varies widely, and many organizations lack the specialized knowledge needed to implement guidance effectively.
Additionally, some experts worry about AI systems undermining the standards development process itself. Erik Avakian of Info-Tech Research Group warned about AI-generated comments flooding NIST’s public input processes, potentially drowning out legitimate stakeholder feedback.60
Impact and Effectiveness
NIST’s influence on AI governance stems primarily from its convening power and standard-setting authority rather than regulatory enforcement. The agency’s frameworks have been widely referenced in policy discussions and adopted by organizations seeking to demonstrate responsible AI practices.
The AI RMF has become a de facto benchmark for AI risk management in the United States, with hundreds of organizations using it to structure their governance approaches.61 The framework’s alignment with NIST’s established Cybersecurity Framework and Privacy Framework allows organizations to integrate AI governance into existing risk management processes.62
NIST’s partnership model has proven effective at engaging major institutions. The AISI Consortium’s 200+ members and the AI Consortium’s 280+ organizations represent significant industry buy-in.63 Voluntary agreements with frontier AI model developers enable NIST to conduct evaluations and testing that would otherwise be impossible for a government agency with limited resources.64
However, the effectiveness of NIST’s work faces limitations. The $20 million investment in MITRE AI centers and $6 million Carnegie Mellon partnership, while substantial, remain modest relative to private sector AI investment.65 The agency’s persistent underfunding constrains its ability to conduct cutting-edge research, attract top talent, and rapidly develop new standards as AI capabilities advance.66
Acting NIST Director Craig Burkhardt has emphasized the agency’s goal to “remove barriers to American AI innovation and accelerate the application of our AI technologies around the world” while strengthening U.S. manufacturing competitiveness and critical infrastructure security.67 Whether NIST can achieve these ambitious aims with current resources remains uncertain.
International Engagement
NIST participates actively in international AI governance efforts through multiple channels. The agency engages with the Organisation for Economic Co-operation and Development (OECD), the Quadrilateral Security Dialogue, and bilateral initiatives across Asia, Europe, the Middle East, and North America, often partnering with the U.S. Department of State and International Trade Administration.68
The July 2024 Global AI Standards Engagement Plan (NIST AI 100-5) outlines NIST’s strategy for promoting scientifically sound, accessible standards in international forums.69 This work aims to ensure U.S. technical approaches shape global AI standards development rather than being shaped by standards developed elsewhere.
NIST’s international role includes contributing to AI safety institutes established by other countries. The agency coordinates with counterpart organizations in the United Kingdom, European Union, and other nations working on AI evaluation and safety testing.70
Key Uncertainties
Several important questions remain about NIST’s AI work:
- Resource Sufficiency: Can NIST effectively fulfill its expanded AI mandate given persistent funding constraints and the Fiscal Responsibility Act’s spending limits through 2029?
- Standards Pace: Will voluntary standards development keep pace with rapid AI capability advances, particularly for agentic systems and novel architectures beyond current paradigms?
- International Influence: To what extent will NIST’s technical approaches shape global AI standards versus being influenced by standards developed in other jurisdictions with different governance philosophies?
- Voluntary Adoption: How widely will organizations adopt NIST’s voluntary frameworks, and will voluntary adoption prove sufficient to manage AI risks, or will future regulation mandate compliance?
- Evaluation Capabilities: Can NIST develop evaluation methods that effectively assess frontier AI systems’ safety properties, especially for emergent capabilities and long-horizon risks?
- Private Sector Relationship: How will NIST’s relationships with frontier AI developers evolve as commercial pressures potentially conflict with safety evaluation transparency?
- Technical vs. Governance Balance: Will NIST successfully integrate sociotechnical considerations into its frameworks despite its historical focus on technical measurement and standards?
The answers to these questions will significantly shape NIST’s effectiveness as a central coordinator of U.S. AI safety and governance efforts in coming years.
Sources
Footnotes
- Husch Blackwell: NIST Introduces AI Safety Institute Leaders and Consortium
- NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure
- ANSI News: NIST Launches Pilot Project to Propel AI Innovation
- NIST AI 100-5: A Plan for Global Engagement on AI Standards (PDF)
- King & Spalding: NIST Releases Series of AI Guidelines, Software
- NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure
- NIST News: NIST Awards Over $1.8 Million to Small Businesses
- JD Supra: AI Risk Meets Cyber Governance - NIST’s Cybersecurity Framework Profile
- Inside Privacy: NIST Publishes Preliminary Draft of Cybersecurity Framework Profile for AI
- Crowell: NIST Releases Draft Framework for AI Cybersecurity
- NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure
- NIST News: NIST Awards Over $1.8 Million to Small Businesses
- NIST News: NIST Announces Funding Opportunity for AI-Focused Manufacturing USA Institute
- Husch Blackwell: NIST Introduces AI Safety Institute Leaders and Consortium
- NIST: Comments Received on Proposal for Identifying and Managing Bias in AI
- NIST: Comments Received on Proposal for Identifying and Managing Bias in AI
- EPIC: Comments to NIST on Managing the Risks of Misuse with AI Foundation Models
- EPIC: Comments to NIST on Managing the Risks of Misuse with AI Foundation Models
- CSO Online: NIST’s Attempts to Secure AI Yields Many Questions, No Answers
- CSO Online: NIST’s Attempts to Secure AI Yields Many Questions, No Answers
- CSO Online: NIST’s Attempts to Secure AI Yields Many Questions, No Answers
- Nextgov: Artificial Intelligence Friend, Foe, or Frenemy - NIST Wants to Find Out
- Nextgov: Artificial Intelligence Friend, Foe, or Frenemy - NIST Wants to Find Out
- Lumenova AI: Pros and Cons of Implementing the NIST AI RMF
- CSO Online: NIST’s Attempts to Secure AI Yields Many Questions, No Answers
- Husch Blackwell: NIST Introduces AI Safety Institute Leaders and Consortium
- NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure
- Industrial Cyber: NIST, MITRE Invest $20 Million in AI Centers
- NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure
- NIST AI 100-5: A Plan for Global Engagement on AI Standards (PDF)