
NIST and AI Safety

  • Primary Role: U.S. federal standards agency developing AI measurement tools, frameworks, and guidelines
  • Key Initiative: AI Risk Management Framework (AI RMF 1.0, released January 2023)
  • Funding: FY 2025 budget request of $47.7M for AI work; $20M for MITRE AI centers (2025)
  • Recent Development: U.S. AI Safety Institute (AISI) established under the 2023 Executive Order
  • Approach: Voluntary, non-regulatory standards emphasizing trustworthy AI
  • Influence: Over 280 organizations in the NIST AI Consortium; shapes U.S. AI policy implementation

The National Institute of Standards and Technology (NIST) is a U.S. Department of Commerce agency that has become central to American artificial intelligence governance through its development of measurement standards, risk management frameworks, and safety guidelines.1 Founded in 1901 as the National Bureau of Standards, NIST has been involved in computing standards since the 1960s, though its AI work began in earnest only around 2016-2018.2

NIST’s core AI mission focuses on promoting “trustworthy AI” through science-based standards and voluntary frameworks rather than regulation.3 The agency emphasizes that “safety breeds trust, trust enables adoption, and adoption accelerates innovation” as its guiding principle.4 This approach positions NIST as a coordinator between government, industry, and academia, creating consensus standards that organizations can voluntarily adopt.

The agency’s influence expanded significantly with the October 2023 Executive Order on Safe, Secure, and Trustworthy AI, which established the U.S. AI Safety Institute (AISI) within NIST and gave the agency new mandates for AI system evaluation, red-teaming, and international standards coordination.5 NIST’s work spans fundamental research, applied projects in manufacturing and cybersecurity, and the convening of large multi-stakeholder consortia to develop practical guidance for AI deployment.

While NIST has conducted computing research since the mid-1960s—including developing MAGIC, one of the first intelligent computer graphics terminals—its explicit focus on artificial intelligence emerged much later.6 The agency’s Information Technology Laboratory (ITL) built capabilities in cryptography, biometrics, and data processing standards through the 1990s and 2000s, establishing foundations for later AI work.7

NIST’s dedicated AI program began taking shape around 2016-2018 with the Fundamental and Applied Research and Standards for AI Technologies (FARSAIT) initiative.8 This program aimed to develop comprehensive guidance on trustworthy AI systems, including terminology, taxonomy, and measurement approaches. However, the specific timeline of NIST’s early AI involvement remains sparsely documented in public sources.

The agency’s AI work accelerated dramatically with the release of the AI Risk Management Framework (AI RMF 1.0) in January 2023, following extensive public consultation.9 This voluntary framework provided organizations with a structured approach to managing AI risks through four core functions: Govern, Map, Measure, and Manage.10
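
To make the four functions concrete, the sketch below shows one way an organization might encode a simple risk register around them. This is a hypothetical illustration, not part of the framework itself: the AI RMF is a process standard rather than a software specification, and all class and field names here are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str         # surfaced under MAP (establish context, identify risks)
    likelihood: float        # estimated under MEASURE (0.0 to 1.0)
    impact: float            # estimated under MEASURE (0.0 to 1.0)
    mitigation: str = "TBD"  # assigned under MANAGE (prioritize and act)

@dataclass
class RiskRegister:
    owner: str               # GOVERN: a named accountability structure
    risks: list[Risk] = field(default_factory=list)

    def prioritized(self) -> list[Risk]:
        # MANAGE: rank by a simple likelihood-times-impact score
        return sorted(self.risks, key=lambda r: r.likelihood * r.impact, reverse=True)

register = RiskRegister(owner="ai-risk-officer")
register.risks.append(Risk("Training data may embed harmful bias", 0.6, 0.8))
register.risks.append(Risk("Model outputs may leak personal data", 0.3, 0.9))
print(register.prioritized()[0].description)
```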

The October 30, 2023 Executive Order on Safe, Secure, and Trustworthy AI transformed NIST’s role by establishing the U.S. AI Safety Institute (AISI) within the agency.11 AISI’s strategic vision focuses on three interconnected pillars: advancing the science of AI safety through research and evaluation, disseminating safety practices to diverse stakeholders, and supporting coordination across the AI safety community.12

In February 2024, Commerce Secretary Gina Raimondo announced the inaugural leadership team, appointing Elizabeth Kelly as director and Elham Tabassi as chief technology officer.13 The team expanded in April 2024 with five additional senior leaders, including Paul Christiano (former OpenAI researcher) as Head of AI Safety and Adam Russell as Chief Vision Officer.14

The AI RMF represents NIST’s flagship contribution to AI governance. Released January 26, 2023, the framework addresses trustworthy AI attributes including validity, reliability, safety, security, accountability, transparency, privacy-enhancement, and fairness.15 On July 26, 2024, NIST released NIST AI 600-1, the Generative AI Profile, which identifies unique risks posed by generative AI systems and proposes tailored management actions.16

The Generative AI Profile identifies 12 risk categories organizations should address, ranging from data privacy and information security to dangerous content, harmful bias, environmental impacts, and CBRN (Chemical, Biological, Radiological, and Nuclear) information risks.17 Updated guidance released in 2025 expanded the framework to address supply chain vulnerabilities, model provenance, data integrity, and third-party risks, while introducing maturity model guidance for measuring organizational AI risk management capabilities.18
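
As a rough illustration of how an organization might track coverage against the profile's risk categories, consider a checklist like the one below. It includes only the categories named above (the full profile enumerates twelve), and the helper function is a hypothetical construct, not anything NIST publishes.

```python
# Partial list: only the Generative AI Profile risk categories named in the
# text above; the full profile enumerates twelve.
GAI_RISK_CATEGORIES = [
    "data privacy",
    "information security",
    "dangerous content",
    "harmful bias",
    "environmental impacts",
    "CBRN information",
]

def coverage_gaps(documented_actions: dict[str, list[str]]) -> list[str]:
    """Return the risk categories with no documented management action."""
    return [c for c in GAI_RISK_CATEGORIES if not documented_actions.get(c)]

actions = {"data privacy": ["minimize retained prompts"], "harmful bias": ["bias audits"]}
print(coverage_gaps(actions))
# ['information security', 'dangerous content', 'environmental impacts', 'CBRN information']
```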

AISI operates with three core goals: advancing AI safety science through model testing and red-teaming, disseminating safety practices through guidelines and tools, and supporting stakeholder coordination.19 The institute established the AI Safety Institute Consortium (AISIC) in February 2024, bringing together over 200 members from academia, advocacy organizations, private industry, and government to develop standards collaboratively.20

In practice, AISI focuses on developing evaluation approaches for frontier AI models, conducting security assessments, and creating measurement tools. In 2025 the institute was renamed the Center for AI Standards and Innovation (CAISI); under that name it leads evaluations of U.S. and adversary AI systems and has established voluntary agreements with multiple developers of cutting-edge AI models for collaborative research and testing.21

The NIST AI Consortium represents a major public-private partnership, with over 280 organizations from industry, academia, and civil society participating through Cooperative Research and Development Agreements (CRADAs).22 The consortium develops science-based guidelines and standards for AI measurement through open collaborative research, creating a foundation for global AI metrology.

Membership is open to organizations that can contribute expertise, products, data, or models. This approach allows NIST to leverage industry capabilities while maintaining its neutral convening role.

In March 2025, NIST launched the AI Standards “Zero Drafts” Pilot Project, an innovative approach to accelerate standards development.23 The project creates preliminary stakeholder-driven drafts on topics like AI risk management, transparency, and procurement, which are then submitted to formal standards developing organizations (SDOs) for consensus development. Organizations can provide input via aistandardszerodrafts@nist.gov.

NIST also released a Global AI Standards Engagement Plan (NIST AI 100-5) in July 2024, outlining the agency’s approach to international AI standards development.24 The plan addresses both “horizontal” (cross-sector) and “vertical” (sector-specific) standards needs, aiming to ensure scientifically sound, accessible standards globally.

NIST’s research portfolio includes applied AI work in advanced materials discovery, robotic manufacturing, wireless systems, and cybersecurity.25 The agency released Dioptra, an open-source software tool for adversarial AI testing, enabling organizations to evaluate how adversarial attacks affect AI system performance.26
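
Dioptra defines its own experiment format, but the core idea behind adversarial testing can be shown generically. The following is not Dioptra's API; it is a minimal NumPy sketch of the fast gradient sign method (FGSM) against a toy logistic-regression model, the kind of worst-case input perturbation such tools measure.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1   # toy logistic-regression model
x, y = rng.normal(size=8), 1.0   # one input with true label 1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # sigmoid probability

# Gradient of the cross-entropy loss with respect to the input x: (p - y) * w
grad_x = (predict(x) - y) * w

# FGSM: take a small step in the direction that most increases the loss
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")
```

The gap between the two predictions is the kind of robustness metric an evaluation harness aggregates across a whole test set.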

In December 2025, NIST announced a $20 million partnership with MITRE Corporation to establish two research centers: the AI Economic Security Center for U.S. Manufacturing Productivity and the AI Economic Security Center to Secure U.S. Critical Infrastructure from Cyberthreats.27 These centers focus on developing technology evaluations and agentic AI tools to enhance critical infrastructure security and manufacturing competitiveness.

Additional partnerships include a $6 million center with Carnegie Mellon University for cooperative AI testing and evaluation research,28 and over $1.8 million in Small Business Innovation Research (SBIR) awards to 18 companies developing AI-related products, with Phase II funding available up to $400,000.29

On December 16, 2025, NIST released a preliminary draft of its Cybersecurity Framework Profile for Artificial Intelligence (Cyber AI Profile, NISTIR 8596).30 This profile maps three AI focus areas—Secure (managing AI system cybersecurity risks), Defend (using AI to enhance cybersecurity), and Thwart (defending against adversarial AI uses)—onto the six core functions of NIST’s Cybersecurity Framework 2.0.31

The profile addresses securing AI dependencies, integrating AI risks into organizational risk tolerance, deploying AI-augmented security teams, and detecting threats in supplier models. Public comments were solicited through January 30, 2026, with a full public draft planned for later in 2026.32
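
One way to picture the draft's structure is as a mapping from the three focus areas onto CSF 2.0's six core functions (Govern, Identify, Protect, Detect, Respond, Recover). The sketch below is hypothetical: the focus areas and function names come from the draft and CSF 2.0, but the individual entries are invented examples paraphrasing the concerns listed above, not NIST's actual mappings.

```python
# Six core functions of NIST Cybersecurity Framework 2.0
CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]

# Illustrative entries only; the real profile maps each focus area in detail.
profile = {
    "Secure": {   # managing AI system cybersecurity risks
        "Identify": ["inventory AI dependencies and supplier models"],
        "Protect":  ["harden model pipelines and training data stores"],
    },
    "Defend": {   # using AI to enhance cybersecurity
        "Detect":  ["AI-augmented threat detection"],
        "Respond": ["AI-assisted incident triage"],
    },
    "Thwart": {   # defending against adversarial uses of AI
        "Detect":  ["flag AI-generated phishing and coordinated campaigns"],
    },
}

# Sanity check: every mapped function must be one of the six CSF functions
for area, mapping in profile.items():
    assert all(fn in CSF_FUNCTIONS for fn in mapping), area
```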

NIST’s AI work is funded through federal appropriations, which have faced constraints despite expanded mandates. The agency’s overall annual budget is approximately $1 billion to $1.3 billion, with the President’s FY 2025 budget requesting $47.7 million specifically for AI research, testing infrastructure, risk management guidance, and frameworks.33

However, funding has consistently fallen short of requests since FY 2022. The Fiscal Responsibility Act of 2023 set discretionary spending limits through FY 2029, further constraining available resources.34 This persistent underfunding has prompted discussions about establishing an “agency-related foundation” to attract private investment for AI talent fellowships, rapid evaluations, and benchmark development, potentially bypassing federal procurement limitations.35

Recent major investments include:

  • $20 million for MITRE AI centers (manufacturing and critical infrastructure, 2025)36
  • $6 million for Carnegie Mellon AI cooperative research center (2024)37
  • $1.8 million in SBIR Phase I awards to 18 small businesses (2025)38
  • Up to $70 million over five years for an AI-focused Manufacturing USA institute (announced July 2024, awards pending)39

NIST’s AI leadership structure centers on AISI’s executive team:

  • Director, AISI: Elizabeth Kelly. Special assistant to the president for economic policy; coordinates activities across Commerce, NIST, and the federal government.40
  • Chief Technology Officer, AISI: Elham Tabassi. NIST’s chief AI advisor; leads technical programs focused on trustworthy AI; previously Chief of Staff of NIST’s Information Technology Laboratory.41
  • Head of AI Safety: Paul Christiano. Former OpenAI leader and founder of the Alignment Research Center, a nonprofit focused on AI alignment research.42
  • Chief Vision Officer: Adam Russell. Director of the AI Division at the University of Southern California’s Information Sciences Institute.43
  • Acting Chief Operating Officer and Chief of Staff: Mara Campbell. Former deputy COO at Commerce’s Economic Development Administration.44
  • Senior Advisor: Rob Reich. Professor of political science at Stanford University and associate director of Stanford’s Institute for Human-Centered AI.45
  • Head of International Engagement: Mark Latonero. Former deputy director of the National AI Initiative Office at the White House Office of Science and Technology Policy.46

This leadership team shapes NIST’s approach to AI safety evaluation, international standards coordination, and public-private partnership development.

Civil rights groups and policy organizations have criticized NIST’s frameworks for overemphasizing technical solutions to bias while neglecting systemic and institutional factors.47 In comments on NIST’s bias management proposal, advocacy groups argued the agency’s approach is “unhelpful and dangerously idealistic” because it focuses on algorithmic fixes without addressing how humans and institutions misuse AI systems.48

NIST itself acknowledges these limitations. In a March 2022 report, the agency noted that AI bias extends beyond data quality to include human biases (such as subjective decisions in filling data gaps) and systemic biases rooted in institutional discrimination.49 Research lead Reva Schwartz emphasized the need for socio-technical approaches, stating that “purely technical efforts fall short” in managing AI bias.50

Exclusion of Critical Risks from Misuse Frameworks

The Electronic Privacy Information Center (EPIC) criticized NIST AI 800-1 for deprioritizing bias, discrimination, hallucinations, and privacy risks in its guidance on dual-use foundation models.51 EPIC argues that threat actors exploit these very vulnerabilities for misuse, and excluding them creates dangerous blind spots. The organization recommended incorporating sociotechnical factors as required by Executive Order 14110.52

NIST’s AI security concept papers have been criticized for identifying enterprise challenges—such as opacity in model training, unclear data usage, and difficulties maintaining AI system inventories—without offering specific mitigations.53 Jeff Man, a senior information security consultant, noted visibility problems: “How do you actually, as an enterprise, gain insight into what AI is deployed, and the data it’s been trained on?”54

Experts also worry whether traditional standards can adequately address emerging risks from agentic AI systems. Vince Worthington highlighted concerns about “cascading failures” where autonomous AI agents might create compounding problems, while Vince Berk of Apprentis Ventures expressed skepticism that standards processes can keep pace with AI threat evolution.55

Research cited by NIST demonstrates concerning security implications of AI-assisted development. One study found that iterative AI "improvements" to code increased its vulnerabilities by 37.6% after five iterations, underscoring the critical need for human oversight.56 NIST acknowledges that AI accelerates various attack vectors, including phishing, data poisoning, and coordinated campaigns by autonomous agents.57

The Institute for Security and Technology (IST) suggested NIST should treat AI models themselves as potential insider threats, where autonomous agents might self-evolve or collude to bypass security controls.58

Organizations implementing the AI RMF face practical difficulties including efficiency losses, high resource and expertise demands, incomplete market adoption, and risk of over-complexity.59 The voluntary nature of NIST’s frameworks means adoption varies widely, and many organizations lack the specialized knowledge needed to implement guidance effectively.

Additionally, some experts worry about AI systems undermining the standards development process itself. Erik Avakian of Info-Tech Research Group warned about AI-generated comments flooding NIST’s public input processes, potentially drowning out legitimate stakeholder feedback.60

NIST’s influence on AI governance stems primarily from its convening power and standard-setting authority rather than regulatory enforcement. The agency’s frameworks have been widely referenced in policy discussions and adopted by organizations seeking to demonstrate responsible AI practices.

The AI RMF has become a de facto benchmark for AI risk management in the United States, with hundreds of organizations using it to structure their governance approaches.61 The framework’s alignment with NIST’s established Cybersecurity Framework and Privacy Framework allows organizations to integrate AI governance into existing risk management processes.62

NIST’s partnership model has proven effective at engaging major institutions. The AISI Consortium’s 200+ members and the AI Consortium’s 280+ organizations represent significant industry buy-in.63 Voluntary agreements with frontier AI model developers enable NIST to conduct evaluations and testing that would otherwise be impossible for a government agency with limited resources.64

However, the effectiveness of NIST’s work faces limitations. The $20 million investment in MITRE AI centers and $6 million Carnegie Mellon partnership, while substantial, remain modest relative to private sector AI investment.65 The agency’s persistent underfunding constrains its ability to conduct cutting-edge research, attract top talent, and rapidly develop new standards as AI capabilities advance.66

Acting NIST Director Craig Burkhardt has emphasized the agency’s goal to “remove barriers to American AI innovation and accelerate the application of our AI technologies around the world” while strengthening U.S. manufacturing competitiveness and critical infrastructure security.67 Whether NIST can achieve these ambitious aims with current resources remains uncertain.

NIST participates actively in international AI governance efforts through multiple channels. The agency engages with the Organisation for Economic Co-operation and Development (OECD), the Quadrilateral Security Dialogue, and bilateral initiatives across Asia, Europe, the Middle East, and North America, often partnering with the U.S. Department of State and International Trade Administration.68

The July 2024 Global AI Standards Engagement Plan (NIST AI 100-5) outlines NIST’s strategy for promoting scientifically sound, accessible standards in international forums.69 This work aims to ensure U.S. technical approaches shape global AI standards development rather than being shaped by standards developed elsewhere.

NIST’s international role includes contributing to AI safety institutes established by other countries. The agency coordinates with counterpart organizations in the United Kingdom, European Union, and other nations working on AI evaluation and safety testing.70

Several important questions remain about NIST’s AI work:

  1. Resource Sufficiency: Can NIST effectively fulfill its expanded AI mandate given persistent funding constraints and the Fiscal Responsibility Act’s spending limits through 2029?

  2. Standards Pace: Will voluntary standards development keep pace with rapid AI capability advances, particularly for agentic systems and novel architectures beyond current paradigms?

  3. International Influence: To what extent will NIST’s technical approaches shape global AI standards versus being influenced by standards developed in other jurisdictions with different governance philosophies?

  4. Voluntary Adoption: How widely will organizations adopt NIST’s voluntary frameworks, and will voluntary adoption prove sufficient to manage AI risks, or will future regulation mandate compliance?

  5. Evaluation Capabilities: Can NIST develop evaluation methods that effectively assess frontier AI systems’ safety properties, especially for emergent capabilities and long-horizon risks?

  6. Private Sector Relationship: How will NIST’s relationships with frontier AI developers evolve as commercial pressures potentially conflict with safety evaluation transparency?

  7. Technical vs. Governance Balance: Will NIST successfully integrate sociotechnical considerations into its frameworks despite its historical focus on technical measurement and standards?

The answers to these questions will significantly shape NIST’s effectiveness as a central coordinator of U.S. AI safety and governance efforts in coming years.

  1. NIST Artificial Intelligence Overview

  2. NIST Timeline

  3. NIST AI Risk Management Framework

  4. AISI Strategic Vision Document

  5. NIST News: Commerce Secretary Announces AISI Leadership

  6. NIST ITL: History Timeline

  7. NIST Cyber History

  8. NIST Artificial Intelligence - AI Research

  9. NIST AI RMF - Palo Alto Networks Cyberpedia

  10. NIST AI RMF - AuditBoard Blog

  11. Tech Policy Press: Unpacking New NIST Guidance on AI

  12. AISI Strategic Vision Document

  13. NIST News: Commerce Secretary Announces AISI Leadership

  14. Nextgov: NIST Adds 5 New Members to AI Safety Institute

  15. NIST AI RMF - Palo Alto Networks Cyberpedia

  16. NIST AI Risk Management Framework

  17. NIST AI RMF - Palo Alto Networks Cyberpedia

  18. ISPartners: NIST AI RMF 2025 Updates

  19. AISI Strategic Vision Document

  20. Husch Blackwell: NIST Introduces AI Safety Institute Leaders and Consortium

  21. NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure

  22. NIST AI Consortium

  23. ANSI News: NIST Launches Pilot Project to Propel AI Innovation

  24. NIST AI 100-5: A Plan for Global Engagement on AI Standards (PDF)

  25. NIST Artificial Intelligence - AI Research

  26. King & Spalding: NIST Releases Series of AI Guidelines, Software

  27. NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure

  28. CMU News: NIST Awards $6M to Carnegie Mellon University

  29. NIST News: NIST Awards Over $1.8 Million to Small Businesses

  30. JD Supra: AI Risk Meets Cyber Governance - NIST’s Cybersecurity Framework Profile

  31. Inside Privacy: NIST Publishes Preliminary Draft of Cybersecurity Framework Profile for AI

  32. Crowell: NIST Releases Draft Framework for AI Cybersecurity

  33. Ropes Data Philes: A Very Merry NISTmas - 2024 Updates

  34. FAS: NIST Foundation

  35. FAS: NIST Foundation

  36. NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure

  37. CMU News: NIST Awards $6M to Carnegie Mellon University

  38. NIST News: NIST Awards Over $1.8 Million to Small Businesses

  39. NIST News: NIST Announces Funding Opportunity for AI-Focused Manufacturing USA Institute

  40. Husch Blackwell: NIST Introduces AI Safety Institute Leaders and Consortium

  41. Digital.gov: Overview of NIST Initiatives on AI Standards

  42. Nextgov: NIST Adds 5 New Members to AI Safety Institute

  43. Nextgov: NIST Adds 5 New Members to AI Safety Institute

  44. Nextgov: NIST Adds 5 New Members to AI Safety Institute

  45. Nextgov: NIST Adds 5 New Members to AI Safety Institute

  46. Nextgov: NIST Adds 5 New Members to AI Safety Institute

  47. NIST: Comments Received on Proposal for Identifying and Managing Bias in AI

  48. NIST: Comments Received on Proposal for Identifying and Managing Bias in AI

  49. NIST News: There’s More to AI Bias Than Biased Data

  50. NIST News: There’s More to AI Bias Than Biased Data

  51. EPIC: Comments to NIST on Managing the Risks of Misuse with AI Foundation Models

  52. EPIC: Comments to NIST on Managing the Risks of Misuse with AI Foundation Models

  53. CSO Online: NIST’s Attempts to Secure AI Yields Many Questions, No Answers

  54. CSO Online: NIST’s Attempts to Secure AI Yields Many Questions, No Answers

  55. CSO Online: NIST’s Attempts to Secure AI Yields Many Questions, No Answers

  56. Nextgov: Artificial Intelligence Friend, Foe, or Frenemy - NIST Wants to Find Out

  57. Nextgov: Artificial Intelligence Friend, Foe, or Frenemy - NIST Wants to Find Out

  58. IST: Managing Misuse - IST Submits Comments

  59. Lumenova AI: Pros and Cons of Implementing the NIST AI RMF

  60. CSO Online: NIST’s Attempts to Secure AI Yields Many Questions, No Answers

  61. Future of Life Institute: NIST

  62. ISPartners: NIST AI RMF 2025 Updates

  63. Husch Blackwell: NIST Introduces AI Safety Institute Leaders and Consortium

  64. NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure

  65. Industrial Cyber: NIST, MITRE Invest $20 Million in AI Centers

  66. FAS: NIST Foundation

  67. NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure

  68. NIST: Technical Contributions to AI Governance

  69. NIST AI 100-5: A Plan for Global Engagement on AI Standards (PDF)

  70. NIST: Technical Contributions to AI Governance