Longterm Wiki
Updated 2026-03-12
Summary

NIST plays a central coordinating role in U.S. AI governance through voluntary standards and risk management frameworks, but faces criticism for technical focus over systemic issues and funding constraints that limit effectiveness. The agency's AI Safety Institute represents a significant institutional development for AI safety evaluation and international coordination.

NIST and AI Safety

Type: Government
Related people: Paul Christiano
Related organizations: OpenAI

Quick Assessment

Primary Role: U.S. federal standards agency developing AI measurement tools, frameworks, and guidelines
Key Initiative: AI Risk Management Framework (AI RMF 1.0, released January 2023)
Funding: FY 2025 budget request included $50M for AI work; $20M for MITRE AI centers (2025)
Recent Development: U.S. AI Safety Institute (AISI) established under the October 2023 Executive Order
Approach: Voluntary, non-regulatory standards emphasizing trustworthy AI
Influence: Over 280 organizations in the NIST AI Consortium; shapes U.S. AI policy implementation
Source: Official website, nist.gov

Overview

The National Institute of Standards and Technology (NIST) is a U.S. Department of Commerce agency that has become central to American artificial intelligence governance through its development of measurement standards, risk management frameworks, and safety guidelines.1 Founded in 1901 as the National Bureau of Standards, the agency has been involved in computing standards since the 1960s, but its dedicated AI work began in earnest around 2016-2018.2

NIST's core AI mission focuses on promoting "trustworthy AI" through science-based standards and voluntary frameworks rather than regulation.3 The agency emphasizes that "safety breeds trust, trust enables adoption, and adoption accelerates innovation" as its guiding principle.4 This approach positions NIST as a coordinator between government, industry, and academia, creating consensus standards that organizations can voluntarily adopt.

The agency's influence expanded significantly with the October 2023 Executive Order on Safe, Secure, and Trustworthy AI, which established the U.S. AI Safety Institute (AISI) within NIST and gave the agency new mandates for AI system evaluation, red-teaming, and international standards coordination.5 NIST's work spans fundamental research, applied projects in manufacturing and cybersecurity, and the convening of large multi-stakeholder consortia to develop practical guidance for AI deployment.

History and Evolution

Early Computing Work

While NIST has conducted computing research since the mid-1960s—including developing MAGIC, one of the first intelligent computer graphics terminals—its explicit focus on artificial intelligence emerged much later.6 The agency's Information Technology Laboratory (ITL) built capabilities in cryptography, biometrics, and data processing standards through the 1990s and 2000s, establishing foundations for later AI work.7

Emergence of AI Focus (2016-2023)

NIST's dedicated AI program began taking shape around 2016-2018 with the Fundamental and Applied Research and Standards for AI Technologies (FARSAIT) initiative.8 This program aimed to develop comprehensive guidance on trustworthy AI systems, including terminology, taxonomy, and measurement approaches. However, the specific timeline of NIST's early AI involvement remains sparsely documented in public sources.

The agency's AI work accelerated dramatically with the release of the AI Risk Management Framework (AI RMF 1.0) in January 2023, following extensive public consultation.9 This voluntary framework provided organizations with a structured approach to managing AI risks through four core functions: Govern, Map, Measure, and Manage.10

AI Safety Institute Era (2023-Present)

The October 30, 2023 Executive Order on Safe, Secure, and Trustworthy AI transformed NIST's role by establishing the U.S. AI Safety Institute (AISI) within the agency.11 Among its first publications, AISI released an initial public draft of Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1).12

In February 2024, Commerce Secretary Gina Raimondo announced the inaugural leadership team, appointing Elizabeth Kelly as director and Elham Tabassi as chief technology officer.13 The team expanded with five additional senior leaders, including Paul Christiano (former OpenAI leader and founder of the nonprofit Alignment Research Center) as Head of AI Safety, Adam Russell (director of the Information Sciences Institute's AI Division at the University of Southern California) as Chief Vision Officer, Mara Campbell as acting chief operating officer and chief of staff, Rob Reich as senior advisor, and Mark Latonero as head of international engagement.14

Major Programs and Initiatives

AI Risk Management Framework

The AI RMF represents NIST's flagship contribution to AI governance. Initiated in 2021 and released in January 2023, the framework addresses trustworthy AI attributes including validity, reliability, safety, security, and resilience.15 On July 26, 2024, NIST released NIST AI 600-1, the Generative AI Profile, which identifies unique risks posed by generative AI systems and proposes tailored management actions for organizations based on their goals and priorities.16

At its core, the AI RMF is built on four functions: Govern, Map, Measure, and Manage, providing a systematic approach to improving the robustness and reliability of AI systems.17 Updated guidance released in 2025 expanded the framework to address supply chain vulnerabilities, model provenance, data integrity, and third-party risks, while introducing maturity model guidance for measuring organizational AI risk management capabilities.18
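The Govern, Map, Measure, and Manage functions are often operationalized as a continuous loop over an organization's risk register. As a minimal sketch (a hypothetical structure for illustration, not an official NIST schema), such a register might be keyed by function:

```python
# Hypothetical AI RMF-style risk register, organized by the framework's
# four core functions. Names, fields, and items are illustrative only.
AI_RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

risk_register = {
    "Govern": [  # policies, roles, and accountability structures
        {"item": "Assign a risk owner for each deployed model"},
    ],
    "Map": [  # establish context and identify risks
        {"item": "Document intended use and foreseeable misuse"},
    ],
    "Measure": [  # analyze, assess, and track identified risks
        {"item": "Benchmark model bias across demographic slices"},
    ],
    "Manage": [  # prioritize and act on risks
        {"item": "Define rollback procedure for degraded model behavior"},
    ],
}

def open_items(register):
    """Flatten the register into (function, item) pairs for review."""
    return [(fn, entry["item"])
            for fn in AI_RMF_FUNCTIONS
            for entry in register[fn]]

for fn, item in open_items(risk_register):
    print(f"{fn}: {item}")
```

The point of the loop structure is that the framework is iterative: items identified under Map feed measurements under Measure, whose results drive actions under Manage, all under governance policies set in Govern.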

U.S. AI Safety Institute (AISI)

AISI operates with three core goals: advancing AI safety science through model testing and red-teaming, disseminating safety practices through guidelines and tools, and supporting stakeholder coordination.19 The institute established the AI Safety Institute Consortium (AISIC) in February 2024, bringing together over 200 members from academia, advocacy organizations, private industry, and government to develop standards collaboratively.20

In practice, the institute focuses on developing evaluation approaches for frontier AI models, conducting security assessments, and creating measurement tools. In 2025 the U.S. AI Safety Institute was renamed the Center for AI Standards and Innovation (CAISI); under that name it leads evaluations of U.S. and adversary AI systems and has established voluntary agreements with multiple developers of cutting-edge AI models for collaborative research and testing.21

NIST AI Consortium

The NIST AI Consortium represents a major public-private partnership, with over 280 organizations from industry, academia, and civil society participating through Cooperative Research and Development Agreements (CRADAs).22 The consortium develops science-based guidelines and standards for AI measurement through open collaborative research, creating a foundation for global AI metrology.

Membership is open to organizations that can contribute expertise, products, data, or models. This approach allows NIST to leverage industry capabilities while maintaining its neutral convening role.

Standards Development Initiatives

In March 2025, NIST launched the AI Standards "Zero Drafts" Pilot Project, an innovative approach to accelerate standards development.23 The project creates preliminary stakeholder-driven drafts on topics like AI risk management, transparency, and procurement, which are then submitted to formal standards developing organizations (SDOs) for consensus development. Organizations can provide input via aistandardszerodrafts@nist.gov.

NIST also released a Global AI Standards Engagement Plan (NIST AI 100-5) in July 2024, outlining the agency's approach to international AI standards development.24 The plan addresses both "horizontal" (cross-sector) and "vertical" (sector-specific) standards needs, aiming to ensure scientifically sound, accessible standards globally.

Research and Testing Infrastructure

NIST's research portfolio includes applied AI work in advanced materials discovery, robotic manufacturing, wireless systems, and cybersecurity.25 The agency released Dioptra, an open-source software tool for adversarial AI testing, enabling organizations to evaluate how adversarial attacks affect AI system performance.26
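Dioptra itself is a full test platform, but the core idea it supports, measuring how much a small adversarial perturbation degrades a model's accuracy, can be illustrated in a few lines. The sketch below is a generic FGSM-style example on a toy linear model under stated assumptions; it does not use Dioptra's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data: the true class is the sign of feature 0.
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)

# A "trained" linear model whose weights match the true decision rule.
w = np.zeros(5)
w[0] = 1.0

def predict(X):
    return (X @ w > 0).astype(int)

def fgsm(X, y, eps):
    # For a linear score s = X @ w, the gradient w.r.t. X is w; push each
    # point against its own class by eps * sign(gradient) (FGSM-style).
    direction = np.sign(w) * np.where(y == 1, -1.0, 1.0)[:, None]
    return X + eps * direction

clean_acc = (predict(X) == y).mean()
adv_acc = (predict(fgsm(X, y, eps=0.5)) == y).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

An adversarial evaluation of this kind reports the gap between clean and perturbed accuracy; tools like Dioptra automate the same comparison across attack types, models, and datasets.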

In December 2025, NIST announced a $20 million partnership with MITRE Corporation to establish two research centers: the AI Economic Security Center for U.S. Manufacturing Productivity and the AI Economic Security Center to Secure U.S. Critical Infrastructure from Cyberthreats.27 These centers focus on developing technology evaluations and agentic AI tools to enhance critical infrastructure security and manufacturing competitiveness.

Additional partnerships include a $6 million center with Carnegie Mellon University for cooperative AI testing and evaluation research,28 and over $1.8 million in Small Business Innovation Research (SBIR) awards to 18 companies developing AI-related products, with Phase II funding available up to $400,000.29

Cybersecurity and AI Integration

On December 16, 2025, NIST released a preliminary draft of its Cybersecurity Framework Profile for Artificial Intelligence (Cyber AI Profile, NISTIR 8596).30 This profile maps three AI focus areas—Secure (managing AI system cybersecurity risks), Defend (using AI to enhance cybersecurity), and Thwart (defending against adversarial AI uses)—onto the six core functions of NIST's Cybersecurity Framework 2.0.31

The profile addresses securing AI dependencies, integrating AI risks into organizational risk tolerance, deploying AI-augmented security teams, and detecting threats in supplier models. Public comments were solicited through January 30, 2026.32
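Conceptually, the profile is a matrix crossing the three focus areas with the six CSF 2.0 functions (Govern, Identify, Protect, Detect, Respond, Recover). A hypothetical sketch of that structure, with illustrative entries rather than the profile's actual text:

```python
# Illustrative structure only; cell contents paraphrase the profile's
# themes, not its actual outcomes.
FOCUS_AREAS = {
    "Secure": "managing cybersecurity risks to AI systems",
    "Defend": "using AI to enhance cybersecurity",
    "Thwart": "defending against adversarial uses of AI",
}
CSF_FUNCTIONS = ("Govern", "Identify", "Protect", "Detect", "Respond", "Recover")

# Each cell of the profile holds tailored outcomes for one focus area
# under one CSF 2.0 function.
profile = {(area, fn): [] for area in FOCUS_AREAS for fn in CSF_FUNCTIONS}
profile[("Secure", "Identify")].append(
    "Inventory deployed AI systems and their dependencies")
profile[("Defend", "Detect")].append(
    "Deploy AI-augmented threat detection for security teams")
```

The matrix shape (3 focus areas by 6 functions) is why the profile can reuse existing CSF 2.0 governance processes while adding AI-specific outcomes in each cell.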

Funding and Resources

NIST's AI work operates on federal appropriations, which have faced constraints despite expanded mandates. The President's FY 2025 budget request for NIST totaled $1.5 billion, including $50 million for AI research, testing infrastructure, risk management guidance, and frameworks.33

However, funding has consistently fallen short of requests since FY 2022. The Fiscal Responsibility Act of 2023 set discretionary spending limits through FY 2029, further constraining available resources.34 This persistent underfunding has prompted discussions about establishing an "agency-related foundation" to attract private investment for AI talent fellowships, rapid evaluations, and benchmark development, potentially bypassing federal procurement limitations.35

Recent major investments include:

  • $20 million for MITRE AI centers (manufacturing and critical infrastructure, 2025)36
  • $6 million for Carnegie Mellon AI cooperative research center (2024)37
  • $1.8 million in SBIR Phase I awards to 18 small businesses (2025)38
  • Up to $70 million over five years for an AI-focused Manufacturing USA institute (announced July 2024, awards pending)39

Leadership and Key Personnel

NIST's AI leadership structure centers on AISI's executive team:

Director, AISI: Elizabeth Kelly. Coordinates activities across Commerce, NIST, and the federal government40
Chief Technology Officer, AISI: Elham Tabassi. Chief of Staff in the Information Technology Laboratory (ITL) at NIST41
Head of AI Safety: Paul Christiano. Former OpenAI leader and founder of the Alignment Research Center, a nonprofit focused on AI alignment research42
Chief Vision Officer: Adam Russell. Director of the Information Sciences Institute's AI Division at the University of Southern California43
Acting Chief Operating Officer and Chief of Staff: Mara Campbell. Former deputy chief operating officer at Commerce's Economic Development Administration44
Senior Advisor: Rob Reich. Professor of political science at Stanford University and associate director of Stanford's Institute for Human-Centered AI45
Head of International Engagement: Mark Latonero. Former deputy director of the National AI Initiative Office at the White House Office of Science and Technology Policy46

This leadership team shapes NIST's approach to AI safety evaluation, international standards coordination, and public-private partnership development.

Criticisms and Controversies

Technical Focus vs. Systemic Factors

Commenters on NIST's bias management proposal have raised concerns that the agency's approach focuses too heavily on technical and algorithmic solutions while neglecting the role of human decision-making and systemic factors.47 For example, commenters argued that accountability for bias is tied to human interventions in the decision-making process, and that defining societal bias purely as a byproduct of cognition fails to account for identifiable, hierarchical social systems that produce bias.48

NIST itself acknowledges these limitations. In a March 2022 report, the agency noted that AI bias extends beyond data quality to include human biases (such as subjective decisions in filling data gaps) and systemic biases rooted in institutional discrimination.49 Research lead Reva Schwartz emphasized the need for socio-technical approaches, stating that "organizations often default to overly technical solutions for AI bias issues" that "do not adequately capture the societal impact of AI systems," and that purely technical efforts to solve the problem of bias will come up short.50

Exclusion of Critical Risks from Misuse Frameworks

The Electronic Privacy Information Center (EPIC) criticized NIST AI 800-1 for deprioritizing bias, discrimination, hallucinations, and privacy risks in its guidance on dual-use foundation models.51 EPIC argues that threat actors exploit these very vulnerabilities for misuse, and excluding them creates dangerous blind spots. The organization recommended incorporating sociotechnical factors as required by Executive Order 14110.52

Lack of Concrete Solutions

NIST's AI security concept papers have been criticized for identifying enterprise challenges—such as opacity in model training, unclear data usage, and difficulties maintaining AI system inventories—without offering specific mitigations.53 Jeff Man, a senior information security consultant, noted visibility problems: "How do you actually, as an enterprise, gain insight into what AI is deployed, and the data it's been trained on?"54

Experts also worry whether traditional standards can adequately address emerging risks from agentic AI systems. Vince Worthington highlighted concerns about "cascading failures" where autonomous AI agents might create compounding problems, while Vince Berk of Apprentis Ventures expressed skepticism that standards processes can keep pace with AI threat evolution.55

AI-Enhanced Security Vulnerabilities

Research cited by NIST demonstrates concerning security implications of AI-assisted development. One study found that AI code improvements increase vulnerabilities by 37.6% after five iterations, emphasizing the critical need for human oversight.56 NIST acknowledges that AI accelerates various attack vectors including phishing, data poisoning, and coordinated campaigns by autonomous agents.57

The Institute for Security and Technology (IST) suggested NIST should treat AI models themselves as potential insider threats, where autonomous agents might self-evolve or collude to bypass security controls.58

Implementation Challenges

Organizations implementing the AI RMF face practical difficulties including efficiency losses, high resource and expertise demands, incomplete market adoption, and risk of over-complexity.59 The voluntary nature of NIST's frameworks means adoption varies widely, and many organizations lack the specialized knowledge needed to implement guidance effectively.

Additionally, some experts worry about AI systems undermining the standards development process itself. Erik Avakian of Info-Tech Research Group warned about AI-generated comments flooding NIST's public input processes, potentially drowning out legitimate stakeholder feedback.60

Impact and Effectiveness

NIST's influence on AI governance stems primarily from its convening power and standard-setting authority rather than regulatory enforcement. The agency's frameworks have been widely referenced in policy discussions and adopted by organizations seeking to demonstrate responsible AI practices.

In 2023, NIST released the AI Risk Management Framework and launched the Trustworthy AI Resource Center to help organizations manage AI risks.61 NIST is actively aligning its AI RMF with the Cybersecurity Framework (CSF) and Privacy Framework, helping organizations unify governance and risk programs under one umbrella.62

NIST's partnership model has proven effective at engaging major institutions. The AISIC includes over 200 members from academia, advocacy organizations, private industry, and the public sector.63 Voluntary agreements with frontier AI model developers enable NIST to conduct collaborative research and voluntary testing of industry models for priority national security capabilities that would otherwise be difficult for a government agency with limited resources.64

However, the effectiveness of NIST's work faces limitations. The $20 million investment to establish two AI centers focused on manufacturing productivity and critical infrastructure cybersecurity, while substantial, remains modest relative to private sector AI investment.65 NIST's funding has remained a fraction of that of the industries it is supposed to set standards for, and since FY 2022 the agency has received lower appropriations than it has requested. It also struggles to attract the specialized science and technology talent it needs, given competition for technical skills and pay that is uncompetitive with the private sector.66

NIST will rely on existing resources to build on its expertise and carry forward recommendations in the White House's July 2025 America's AI Action Plan, including efforts to accelerate AI innovation and build American AI infrastructure.67

International Engagement

NIST participates actively in international AI governance efforts through multiple channels. The agency engages with the Organisation for Economic Co-operation and Development (OECD), the Quadrilateral Security Dialogue, and bilateral initiatives across Asia, Europe, the Middle East, and North America, often partnering with the U.S. Department of State and International Trade Administration.68

The July 2024 Global AI Standards Engagement Plan (NIST AI 100-5) outlines NIST's strategy for promoting scientifically sound, accessible standards in international forums.69 This work aims to ensure U.S. technical approaches shape global AI standards development rather than being shaped by standards developed elsewhere.

NIST's international role includes contributing to AI safety institutes established by other countries. The agency participates in discussions and partnerships with counterpart organizations working on AI evaluation and safety testing as part of its broader engagement with U.S. and international AI governance efforts.68

Key Uncertainties

Several important questions remain about NIST's AI work:

  1. Resource Sufficiency: Can NIST effectively fulfill its expanded AI mandate given persistent funding constraints and the Fiscal Responsibility Act's spending limits through 2029?

  2. Standards Pace: Will voluntary standards development keep pace with rapid AI capability advances, particularly for agentic systems and novel architectures beyond current paradigms?

  3. International Influence: To what extent will NIST's technical approaches shape global AI standards versus being influenced by standards developed in other jurisdictions with different governance philosophies?

  4. Voluntary Adoption: How widely will organizations adopt NIST's voluntary frameworks, and will voluntary adoption prove sufficient to manage AI risks, or will future regulation mandate compliance?

  5. Evaluation Capabilities: Can NIST develop evaluation methods that effectively assess frontier AI systems' safety properties, especially for emergent capabilities and long-horizon risks?

  6. Private Sector Relationship: How will NIST's relationships with frontier AI developers evolve as commercial pressures potentially conflict with safety evaluation transparency?

  7. Technical vs. Governance Balance: Will NIST successfully integrate sociotechnical considerations into its frameworks despite its historical focus on technical measurement and standards?

The answers to these questions will significantly shape NIST's effectiveness as a central coordinator of U.S. AI safety and governance efforts in coming years.

Sources

Footnotes

  1. NIST Artificial Intelligence Overview

  2. NIST Timeline

  3. NIST AI Risk Management Framework

  4. AISI Strategic Vision Document

  5. NIST News: Commerce Secretary Announces AISI Leadership

  6. NIST ITL About ITL - ITL History Timeline

  7. NIST Cyber History

  8. NIST Artificial Intelligence - AI Research

  9. NIST AI RMF - Palo Alto Networks Cyberpedia

  10. NIST AI RMF - AuditBoard Blog

  11. Tech Policy Press: Unpacking New NIST Guidance on AI

  12. AISI Strategic Vision Document

  13. NIST News: Commerce Secretary Announces AISI Leadership

  14. Nextgov: NIST Adds 5 New Members to AI Safety Institute

  15. NIST AI RMF - Palo Alto Networks Cyberpedia

  16. NIST AI Risk Management Framework

  17. NIST AI RMF - Palo Alto Networks Cyberpedia

  18. ISPartners: NIST AI RMF 2025 Updates

  19. AISI Strategic Vision Document

  20. Husch Blackwell: NIST Introduces AI Safety Institute Leaders and Consortium

  21. NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure

  22. NIST AI Consortium

  23. ANSI News: NIST Launches Pilot Project to Propel AI Innovation

  24. NIST AI 100-5: A Plan for Global Engagement on AI Standards (PDF)

  25. Citation rc-cddc (data unavailable — rebuild with wiki-server access)

  26. King & Spalding: NIST Releases Series of AI Guidelines, Software

  27. NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure

  28. CMU News: NIST Awards $6M to Carnegie Mellon University

  29. NIST News: NIST Awards Over $1.8 Million to Small Businesses

  30. JD Supra: AI Risk Meets Cyber Governance - NIST's Cybersecurity Framework Profile

  31. Inside Privacy: NIST Publishes Preliminary Draft of Cybersecurity Framework Profile for AI

  32. Crowell: NIST Releases Draft Framework for AI Cybersecurity

  33. Ropes Data Philes: A Very Merry NISTmas - 2024 Updates

  34. FAS: NIST Foundation

  35. FAS: NIST Foundation

  36. NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure

  37. CMU News: NIST Awards $6M to Carnegie Mellon University

  38. Citation rc-7586 (data unavailable — rebuild with wiki-server access)

  39. NIST News: NIST Announces Funding Opportunity for AI-Focused Manufacturing USA Institute

  40. Husch Blackwell: NIST Introduces AI Safety Institute Leaders and Consortium

  41. Digital.gov: Overview of NIST Initiatives on AI Standards

  42. Nextgov: NIST Adds 5 New Members to AI Safety Institute

  43. Nextgov: NIST Adds 5 New Members to AI Safety Institute

  44. Nextgov: NIST Adds 5 New Members to AI Safety Institute

  45. Nextgov: NIST Adds 5 New Members to AI Safety Institute

  46. Nextgov: NIST Adds 5 New Members to AI Safety Institute

  47. NIST: Comments Received on Proposal for Identifying and Managing Bias in AI

  48. NIST: Comments Received on Proposal for Identifying and Managing Bias in AI

  49. NIST News: There's More to AI Bias Than Biased Data

  50. NIST News: There's More to AI Bias Than Biased Data

  51. EPIC: Comments to NIST on Managing the Risks of Misuse with AI Foundation Models

  52. EPIC: Comments to NIST on Managing the Risks of Misuse with AI Foundation Models

  53. CSO Online: NIST's Attempts to Secure AI Yields Many Questions, No Answers

  54. CSO Online: NIST's Attempts to Secure AI Yields Many Questions, No Answers

  55. Citation rc-e946 (data unavailable — rebuild with wiki-server access)

  56. Nextgov: Artificial Intelligence Friend, Foe, or Frenemy - NIST Wants to Find Out

  57. Nextgov: Artificial Intelligence Friend, Foe, or Frenemy - NIST Wants to Find Out

  58. IST: Managing Misuse - IST Submits Comments

  59. Lumenova AI: Pros and Cons of Implementing the NIST AI RMF

  60. CSO Online: NIST's Attempts to Secure AI Yields Many Questions, No Answers

  61. Future of Life Institute: NIST

  62. ISPartners: NIST AI RMF 2025 Updates

  63. Husch Blackwell: NIST Introduces AI Safety Institute Leaders and Consortium

  64. NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure

  65. Industrial Cyber: NIST, MITRE Invest $20 Million in AI Centers

  66. FAS: NIST Foundation

  67. NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure

  68. NIST: Technical Contributions to AI Governance

  69. NIST AI 100-5: A Plan for Global Engagement on AI Standards (PDF)

References

Claims (2)
Additional partnerships include a \$6 million center with Carnegie Mellon University for cooperative AI testing and evaluation research, and over \$1.8 million in Small Business Innovation Research (SBIR) awards to 18 companies developing AI-related products, with Phase II funding available up to \$400,000.
Not verifiable50%Feb 22, 2026
U.S. Secretary of Commerce Gina Raimondo (opens in new window) announced Sept. 24 that the Department of Commerce’s National Institute of Standards and Technology (opens in new window) (NIST) has awarded $6 million to Carnegie Mellon University to establish a joint center to support cooperative research and experimentation for the test and evaluation of modern AI capabilities and tools.

Failed to parse LLM response

- \$6 million for Carnegie Mellon AI cooperative research center (2024)
Accurate100%Feb 22, 2026
U.S. Secretary of Commerce Gina Raimondo (opens in new window) announced Sept. 24 that the Department of Commerce’s National Institute of Standards and Technology (opens in new window) (NIST) has awarded $6 million to Carnegie Mellon University to establish a joint center to support cooperative research and experimentation for the test and evaluation of modern AI capabilities and tools.
2NIST AI Consortiumnist.gov·Government
Claims (1)
The NIST AI Consortium represents a major public-private partnership, with over 280 organizations from industry, academia, and civil society participating through Cooperative Research and Development Agreements (CRADAs). The consortium develops science-based guidelines and standards for AI measurement through open collaborative research, creating a foundation for global AI metrology.
Accurate100%Feb 22, 2026
The Consortium brings together more than 280 organizations to develop science-based and empirically backed guidelines and standards for AI measurement.
Claims (1)
The agency's overall annual budget is approximately \$1-1.3 billion, with the President's FY 2025 budget requesting \$47.7 million specifically for AI research, testing infrastructure, risk management guidance, and frameworks.
Minor issues90%Feb 22, 2026
The President’s FY 2025 budget request for NIST, totaling $1.5 billion, included $50 million to conduct AI research, establish testing infrastructure, develop technical guidance to measure and manage AI risks, and implement best practices and frameworks.

The claim states the agency's overall annual budget is approximately $1-1.3 billion, but the source states the President's FY 2025 budget request for NIST is $1.5 billion, not the overall annual budget. The claim states the President's FY 2025 budget requested $47.7 million for AI research, but the source states $50 million.

Claims (1)
The Institute for Security and Technology (IST) suggested NIST should treat AI models themselves as potential insider threats, where autonomous agents might self-evolve or collude to bypass security controls.
Accurate · 100% · Feb 22, 2026
IST aligns with NIST’s Practice 3.1, #3 (line 20) on insider threats, and recommends the publication also incorporate the concept of an AI model itself constituting an insider threat. While this is a newer concern, our initial inquiries into AI agents (discussed above) and discussions with experts on the topic of “AI control” brings to mind scenarios in which increasingly capable and autonomous AI agents operating within an authorized context might self-evolve and eventually deviate from their intended scope—potentially even colluding with other agents to bypass controls.
Claims (2)
NIST also released a Global AI Standards Engagement Plan (NIST AI 100-5) in July 2024, outlining the agency's approach to international AI standards development. The plan addresses both "horizontal" (cross-sector) and "vertical" (sector-specific) standards needs, aiming to ensure scientifically sound, accessible standards globally.
The July 2024 Global AI Standards Engagement Plan (NIST AI 100-5) outlines NIST's strategy for promoting scientifically sound, accessible standards in international forums. This work aims to ensure U.S.
Claims (1)
On December 16, 2025, NIST released a preliminary draft of its Cybersecurity Framework Profile for Artificial Intelligence (Cyber AI Profile, NISTIR 8596). This profile maps three AI focus areas—Secure (managing AI system cybersecurity risks), Defend (using AI to enhance cybersecurity), and Thwart (defending against adversarial AI uses)—onto the six core functions of NIST's Cybersecurity Framework 2.0.
Minor issues · 90% · Feb 22, 2026
On December 16, 2025, the U.S. National Institute of Standards and Technology ("NIST") published a preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence ("Cyber AI Profile" or "Profile"). According to the draft, the Cyber AI Profile is intended to "provide guidelines for managing cybersecurity risk related to AI systems [and] identify[] opportunities for using AI to enhance cybersecurity capabilities." The draft Profile uses the existing voluntary NIST Cybersecurity Framework ("CSF") 2.0 — which "provides guidance to industry, government agencies, and other organizations to manage cybersecurity risks" — and overlays three AI Focus Areas (Secure, Detect, Thwart) on top of the CSF's outcomes (Functions, Categories, and Subcategories) to suggest considerations for organizations to prioritize when securing AI implementations, using AI to enhance cybersecurity defenses, or defending against adversarial uses of AI.

The claim states that the profile maps the AI focus areas onto the six core functions of NIST's Cybersecurity Framework 2.0. However, the source states that the profile overlays the three AI Focus Areas on top of the CSF's outcomes (Functions, Categories, and Subcategories). The claim names the second focus area "Defend," while the source quote lists it as "Detect." The claim includes the NISTIR number for the Cyber AI Profile, but this is not mentioned in the source.

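The overlay the draft Profile describes is essentially a cross of three AI focus areas with the CSF 2.0 core functions. A minimal, hypothetical sketch in Python — the six function names come from CSF 2.0 (Govern, Identify, Protect, Detect, Respond, Recover), the focus-area names follow the claim above, and the cell contents are placeholders, since the draft defines its actual mappings at the outcome level:

```python
# Hypothetical sketch of the Cyber AI Profile overlay: each of the three
# AI focus areas is considered against every CSF 2.0 core function.
# The function names are CSF 2.0's; the pairing logic here is illustrative only.
CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]
FOCUS_AREAS = {
    "Secure": "managing cybersecurity risks of AI systems",
    "Defend": "using AI to enhance cybersecurity capabilities",
    "Thwart": "defending against adversarial uses of AI",
}

def build_overlay():
    """Return one placeholder cell per (focus area, CSF function) pair."""
    return {
        (area, fn): f"{area} considerations under the {fn} function"
        for area in FOCUS_AREAS
        for fn in CSF_FUNCTIONS
    }

overlay = build_overlay()
print(len(overlay))  # 3 focus areas x 6 functions = 18 cells
```

The point of the sketch is only the shape: every focus area is evaluated against every function, rather than each focus area mapping to a single function.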
Claims (1)
The $20 million investment to establish two AI centers focused on manufacturing productivity and critical infrastructure cybersecurity, while substantial, remains modest relative to private sector AI investment.
Not verifiable · 50% · Feb 22, 2026
Through this award, NIST is investing $20 million to establish two AI centers, namely the AI Economic Security Center for U.S. Manufacturing Productivity and the AI Economic Security Center to Secure U.S. Critical Infrastructure from Cyberthreats, which will advance the delivery of AI-based technology solutions, strengthening U.S. manufacturing and cybersecurity for critical infrastructure.


Claims (5)
The AISIC includes over 200 members from academia, advocacy organizations, private industry, and the public sector. Voluntary agreements with frontier AI model developers enable NIST to conduct collaborative research and voluntary testing of industry models for priority national security capabilities that would otherwise be difficult for a government agency with limited resources.
Minor issues · 80% · Feb 22, 2026
CAISI has established voluntary agreements with multiple developers of leading-edge or “frontier” AI models to enable collaborative research and voluntary testing of industry models for priority national security capabilities.

The claim states that the AISIC includes over 200 members, but the source discusses CAISI, not AISIC, and does not specify a member count. The claim also attributes the voluntary frontier-model agreements to NIST, while the source attributes them to CAISI.

and adversary AI systems and has established voluntary agreements with multiple developers of cutting-edge AI models for collaborative research and testing.
Accurate · 100% · Feb 22, 2026
CAISI has established voluntary agreements with multiple developers of leading-edge or “frontier” AI models to enable collaborative research and voluntary testing of industry models for priority national security capabilities.
Critical Infrastructure from Cyberthreats. These centers focus on developing technology evaluations and agentic AI tools to enhance critical infrastructure security and manufacturing competitiveness.
Accurate · 100% · Feb 22, 2026
The AI Economic Security Center for U.S. Manufacturing Productivity and the AI Economic Security Center to Secure U.S. Critical Infrastructure from Cyberthreats will drive the development and adoption of AI-driven tools, or “agents,” in these two national priority areas.
Claims (1)
Commenters on NIST's bias management proposal have raised concerns that the agency's approach focuses too heavily on technical and algorithmic solutions while neglecting the role of human decision-making and systemic factors. For example, commenters argued that accountability for bias is tied to human interventions in the decision-making process, and that defining societal bias purely as a byproduct of cognition fails to account for identifiable, hierarchical social systems that produce bias.
Accurate · 100% · Feb 22, 2026
The University of British Columbia Mike Zajko 731 Appendix A defines "societal bias" (a term that does a lot of work in this document) from a social psychology textbook as "an adaptable byproduct of human cognition". This does not reflect the understanding of these notions in domains such as sociology (my field) and notable works in the critical AI literature. These biases aren't just byproducts of cognition, but products of identifiable, hierarchical social systems (sexism is not a byproduct).
Claims (1)
The agency's AI work accelerated dramatically with the release of the AI Risk Management Framework (AI RMF 1.0) in January 2023, following extensive public consultation. This voluntary framework provided organizations with a structured approach to managing AI risks through four core functions: Govern, Map, Measure, and Manage.
Minor issues · 85% · Feb 22, 2026
TLDR: The NIST AI Risk Management Framework (NIST AI RMF) is a voluntary guideline designed to help organizations identify, assess, and manage risks associated with artificial intelligence (AI). The framework was designed with the principles of Map, Measure, Manage, and Govern to develop trustworthy, transparent, and ethical AI systems, ensuring responsible AI adoption across various industries.

The claim states the AI RMF 1.0 was released in January 2023, but the source does not provide the release date. The claim states the framework was released following extensive public consultation, but the source only mentions public workshops to gather input and discuss updates to the AI RMF.

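As an illustration of how the four core functions can organize practical risk-management work, here is a minimal, hypothetical risk-register sketch in Python. The structure, field names, and example entries are illustrative only and are not defined by NIST; the AI RMF specifies outcomes, not code:

```python
from dataclasses import dataclass, field

# Illustrative only: a tiny register that groups risk-management activities
# under the AI RMF's four core functions.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RiskEntry:
    description: str
    function: str          # one of the four AI RMF core functions
    actions: list = field(default_factory=list)

    def __post_init__(self):
        # Reject entries that don't fit the framework's four functions.
        if self.function not in RMF_FUNCTIONS:
            raise ValueError(f"unknown AI RMF function: {self.function}")

register = [
    RiskEntry("Unclear accountability for model updates", "Govern",
              ["assign model owner", "document approval chain"]),
    RiskEntry("Training data provenance unknown", "Map",
              ["inventory data sources"]),
]

# Group entries by function for reporting.
by_function = {}
for entry in register:
    by_function.setdefault(entry.function, []).append(entry)
print(sorted(by_function))  # ['Govern', 'Map']
```

Organizing a register this way makes gaps visible: any of the four functions with no entries is a prompt to ask whether that part of the framework has been considered at all.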
Claims (1)
AI Safety Institute (AISI) within the agency. AISI published an initial draft report for Managing the Risk of Misuse for Dual-Use Foundation Models (AI 800-1).
Accurate · 100% · Feb 22, 2026
The US AI Safety Institute, which is housed within NIST and was created to carry out priorities outlined in Biden’s AI executive order, published an initial draft report for Managing the Risk of Misuse for Dual-Use Foundation Models ( AI 800-1 ).
Claims (3)
NIST's AI security concept papers have been criticized for identifying enterprise challenges—such as opacity in model training, unclear data usage, and difficulties maintaining AI system inventories—without offering specific mitigations. Jeff Man, a senior information security consultant, noted visibility problems: "How do you actually, as an enterprise, gain insight into what AI is deployed, and the data it's been trained on?"
Accurate · 100% · Feb 22, 2026
"AI was all the hype at RSA, Blackhat and Defcon. It was at the beginning and end of every vendor sentence," said Jeff Man, an industry veteran who today serves as the senior information security consultant at Online Business Systems. "It was amazing how AI was going to solve all of the problems [and] we were also discovering amazing vulnerabilities." Man also stressed the visibility issues, especially in terms of how AI is deployed company-wide. "Have an inventory and know what you are dealing with. But I am not sure it's even possible to take a complete inventory of what is out there. You have to assume a doomsday scenario."
Vince Worthington highlighted concerns about "cascading failures" where autonomous AI agents might create compounding problems, while Vince Berk of Apprentis Ventures expressed skepticism that standards processes can keep pace with AI threat evolution.
Accurate · 100% · Feb 22, 2026
Forrester’s Worthington stressed that CISOs need to carefully review all current cybersecurity tools because they may not be especially effective at protecting the enterprise from relatively new AI threats. “AI agents and agentic systems introduce new risks that traditional security models are ill-equipped to manage,” Worthington said. “We are seeing growing concerns around the lack of mature detection surfaces, the risk of cascading failures, and the challenge of securing intent rather than just outcomes.” Vince Berk, partner at Apprentis Ventures, was even more skeptical that current standards efforts will be able to make a meaningful difference in protecting companies from AI threats.
Erik Avakian of Info-Tech Research Group warned about AI-generated comments flooding NIST's public input processes, potentially drowning out legitimate stakeholder feedback.
Accurate · 100% · Feb 22, 2026
Erik Avakian, technical counselor at Info-Tech Research Group, said that he applauds NIST&rsquo;s efforts to reach out for community feedback, but he also cautioned that it might backfire. For example, what if AI agents flood the comments with self-serving suggestions?
13. NIST Timeline · nist.gov · Government
Claims (1)
Department of Commerce agency that has become central to American artificial intelligence governance through its development of measurement standards, risk management frameworks, and safety guidelines. Founded in 1901 as the National Bureau of Standards, NIST's AI work began in earnest around 2016-2018, though it has been involved in computing standards since the 1960s.
Claims (1)
NIST's research portfolio includes applied AI work in advanced materials discovery, robotic manufacturing, wireless systems, and cybersecurity. The agency released Dioptra, an open-source software tool for adversarial AI testing, enabling organizations to evaluate how adversarial attacks affect AI system performance.
Minor issues · 85% · Feb 22, 2026
NIST aims to assist organizations through the release of its own open source software tool, Dioptra , which tests the effects of adversarial attacks on AI systems.

The source does not mention NIST's research portfolio including applied AI work in advanced materials discovery, robotic manufacturing, wireless systems, and cybersecurity. The source states that NIST released details about adversarial machine learning in January 2024, not that they released Dioptra in January 2024.

Claims (2)
NIST's research portfolio includes applied AI work in advanced materials discovery, robotic manufacturing, wireless systems, and cybersecurity. The agency released Dioptra, an open-source software tool for adversarial AI testing, enabling organizations to evaluate how adversarial attacks affect AI system performance.
Minor issues · 85% · Feb 22, 2026
FARSAIT’s applied AI research promotes applying AI techniques to NIST research programs in areas including advanced materials discovery, robotic systems in manufacturing environments, and wireless networked control systems.

The claim mentions cybersecurity as an area of applied AI work, but the source only mentions advanced materials discovery, robotic systems in manufacturing environments, and wireless networked control systems. The claim mentions the release of Dioptra, an open-source software tool for adversarial AI testing, but this is not mentioned in the source.

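The sources above do not describe Dioptra's interfaces, but the evaluation idea it implements — compare a model's performance on clean inputs versus adversarially perturbed inputs — can be sketched generically. A toy, hypothetical example in Python; the model, perturbation, and all names here are invented for illustration and are not Dioptra's API:

```python
import random

random.seed(0)

# Toy "model": classifies a number as positive (1) or not (0) by its sign.
def model(x):
    return 1 if x > 0 else 0

# Hypothetical adversarial perturbation: nudge each input toward the
# decision boundary, the worst direction for this classifier.
def perturb(x, eps=1.5):
    return x - eps if x > 0 else x + eps

inputs = [random.uniform(-2, 2) for _ in range(1000)]
labels = [1 if x > 0 else 0 for x in inputs]

# The core of adversarial evaluation: accuracy on clean vs. perturbed inputs.
clean_acc = sum(model(x) == y for x, y in zip(inputs, labels)) / len(inputs)
adv_acc = sum(model(perturb(x)) == y for x, y in zip(inputs, labels)) / len(inputs)
print(clean_acc, adv_acc)  # adversarial accuracy is lower
```

The gap between the two accuracy numbers is the quantity an adversarial-testing tool reports: how much performance degrades when inputs are chosen by an attacker rather than drawn from the normal distribution.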
NIST's dedicated AI program began taking shape around 2016-2018 with the Fundamental and Applied Research and Standards for AI Technologies (FARSAIT) initiative. This program aimed to develop comprehensive guidance on trustworthy AI systems, including terminology, taxonomy, and measurement approaches.
Accurate · 100% · Feb 22, 2026
In FY 2018, NIST initiated the Fundamental and Applied Research and Standards for AI Technologies (FARSAIT) program, which is designed to advance the fundamental and applied AI research at NIST. FARSAIT’s fundamental AI research aims to develop a metrologist’s guide to AI systems that addresses the complex intertwinement of different aspects of trustworthy AI as well as terminology and taxonomy as it relates to the several layers of the AI space.
Claims (2)
AISI operates with three core goals: advancing AI safety science through model testing and red-teaming, disseminating safety practices through guidelines and tools, and supporting stakeholder coordination. The institute established the AI Safety Institute Consortium (AISIC) in February 2024, bringing together over 200 members from academia, advocacy organizations, private industry, and government to develop standards collaboratively.
Minor issues · 90% · Feb 22, 2026
The USAISI Consortium, tagged with the acronym AISIC, includes over 200 members from academia, advocacy organizations, private industry, and the public sector.

The claim that AISI operates with three core goals is unsupported by the provided source. The source states the AISIC was announced on February 8, 2024, not established in February 2024.

The AISIC includes over 200 members from academia, advocacy organizations, private industry, and the public sector. Voluntary agreements with frontier AI model developers enable NIST to conduct collaborative research and voluntary testing of industry models for priority national security capabilities that would otherwise be difficult for a government agency with limited resources.
Minor issues · 85% · Feb 22, 2026
The USAISI Consortium, tagged with the acronym AISIC, includes over 200 members from academia, advocacy organizations, private industry, and the public sector.

The claim mentions voluntary agreements with frontier AI model developers enabling NIST to conduct collaborative research and voluntary testing of industry models for priority national security capabilities. This is not explicitly mentioned in the source. The source mentions that members are required to enter into a consortium Cooperative Research and Development Agreement (CRADA) with NIST.

Claims (1)
On December 16, 2025, NIST released a preliminary draft of its Cybersecurity Framework Profile for Artificial Intelligence (Cyber AI Profile, NISTIR 8596). This profile maps three AI focus areas—Secure (managing AI system cybersecurity risks), Defend (using AI to enhance cybersecurity), and Thwart (defending against adversarial AI uses)—onto the six core functions of NIST's Cybersecurity Framework 2.0.
Minor issues · 90% · Feb 22, 2026
On December 16, 2025, the National Institute of Standards and Technology ("NIST"), a non-regulatory federal agency within the U.S. Department of Commerce that promotes innovation through technical standards setting, released a preliminary draft of its forthcoming Cyber AI Profile.

The claim mentions NISTIR 8596, but the source does not provide this identifier.

Claims (1)
One study found that AI code improvements increase vulnerabilities by 37.6% after five iterations, emphasizing the critical need for human oversight. NIST acknowledges that AI accelerates various attack vectors including phishing, data poisoning, and coordinated campaigns by autonomous agents.
Accurate · 100% · Feb 22, 2026
After just five rounds of AI changes, there was a 37.6% increase in critical vulnerabilities that other AIs could easily exploit.
Claims (1)
Organizations implementing the AI RMF face practical difficulties including efficiency losses, high resource and expertise demands, incomplete market adoption, and risk of over-complexity. The voluntary nature of NIST's frameworks means adoption varies widely, and many organizations lack the specialized knowledge needed to implement guidance effectively.
Accurate · 100% · Feb 22, 2026
Potential Efficiency Losses The primary challenge is the additional time and resources required. Formalizing governance processes, documenting risk controls, and running ongoing evaluations can slow product development cycles and delay the release of new features.
Claims (1)
| Chief Technology Officer, AISI | Elham Tabassi | Chief of Staff in the Information Technology Laboratory (ITL) at NIST |
Inaccurate · 80% · Feb 22, 2026
Elham Tabassi is the chief of staff in the Information Technology Laboratory (ITL) at NIST.

Wrong attribution: the claim identifies Elham Tabassi as the Chief Technology Officer of AISI, but this source identifies her as the chief of staff in the Information Technology Laboratory (ITL) at NIST.

Claims (3)
Initiated in 2021 and released in January 2023, the framework addresses trustworthy AI attributes including validity, reliability, safety, security, and resilience. On July 26, 2024, NIST released NIST AI 600-1, the Generative AI Profile, which identifies unique risks posed by generative AI systems and proposes tailored management actions for organizations based on their goals and priorities.
Minor issues · 80% · Feb 22, 2026
The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) emerged as a response to the growing complexities and potential risks associated with artificial intelligence systems. Initiated in 2021 and released in January 2023, this framework represents a collaborative effort between NIST and a diverse array of stakeholders from both public and private sectors.

The source does not mention NIST AI 600-1, the Generative AI Profile, or the release date of July 26, 2024. The source only mentions trustworthy AI attributes including validity, reliability, safety, security, and resilience.

At its core, the AI RMF is built on four functions: Govern, Map, Measure, and Manage, providing a systematic approach to improving the robustness and reliability of AI systems. Updated guidance released in 2025 expanded the framework to address supply chain vulnerabilities, model provenance, data integrity, and third-party risks, while introducing maturity model guidance for measuring organizational AI risk management capabilities.
Minor issues · 85% · Feb 22, 2026
At its core, the NIST AI RMF is built on four functions: Govern, Map, Measure, and Manage.

The source does not mention that the guidance was updated in 2025. It states that the framework was initiated in 2021 and released in January 2023. The claim mentions that the updated guidance expanded the framework to address supply chain vulnerabilities, model provenance, data integrity, and third-party risks. While the source mentions data integrity, it does not explicitly mention supply chain vulnerabilities, model provenance, or third-party risks. The claim mentions the introduction of maturity model guidance for measuring organizational AI risk management capabilities. This is not explicitly mentioned in the source.

The agency's AI work accelerated dramatically with the release of the AI Risk Management Framework (AI RMF 1.0) in January 2023, following extensive public consultation. This voluntary framework provided organizations with a structured approach to managing AI risks through four core functions: Govern, Map, Measure, and Manage.
Accurate · 100% · Feb 22, 2026
The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) emerged as a response to the growing complexities and potential risks associated with artificial intelligence systems. Initiated in 2021 and released in January 2023, this framework represents a collaborative effort between NIST and a diverse array of stakeholders from both public and private sectors.
Claims (2)
The Fiscal Responsibility Act of 2023 set discretionary spending limits through FY 2029, further constraining available resources. This persistent underfunding has prompted discussions about establishing an "agency-related foundation" to attract private investment for AI talent fellowships, rapid evaluations, and benchmark development, potentially bypassing federal procurement limitations.
Minor issues · 85% · Feb 22, 2026
An agency-related foundation could play a crucial role in addressing these challenges and strengthening NIST's AI mission.

The Fiscal Responsibility Act of 2023 set binding discretionary spending limits for FY 2024 and FY 2025, with non-binding targets through FY 2029, not binding limits through FY 2029 as stated in the claim. The claim mentions "AI talent fellowships, rapid evaluations, and benchmark development" as potential uses of private investment; the source presents these as ideas from a June 2024 paper from the Institute for Progress, not as established plans.

The agency is also struggling to attract the specialized science and technology talent it needs due to competition for technical talent and a lack of competitive pay compared to the private sector.
Accurate · 100% · Feb 22, 2026
In addition, NIST is struggling to attract the specialized science and technology (S&T) talent that it needs due to competition for technical talent, a lack of competitive pay compared to the private sector, a gender-imbalanced culture, and issues with transferring institutional knowledge when individuals transition out of the agency, according to a February 2023 Government Accountability Office report.
23. NIST Cyber History · csrc.nist.gov · Government
Claims (1)
While NIST has conducted computing research since the mid-1960s—including developing MAGIC, one of the first intelligent computer graphics terminals—its explicit focus on artificial intelligence emerged much later. The agency's Information Technology Laboratory (ITL) built capabilities in cryptography, biometrics, and data processing standards through the 1990s and 2000s, establishing foundations for later AI work.
Claims (1)
While NIST has conducted computing research since the mid-1960s—including developing MAGIC, one of the first intelligent computer graphics terminals—its explicit focus on artificial intelligence emerged much later. The agency's Information Technology Laboratory (ITL) built capabilities in cryptography, biometrics, and data processing standards through the 1990s and 2000s, establishing foundations for later AI work.
Minor issues · 85% · Feb 22, 2026
Mid 1960s - MAGIC, one of the first intelligent computer graphics terminals, developed for federal agencies.

The claim says NIST has conducted computing research since the mid-1960s, but the source lists "Automatic Data Processing (ADP) standards development at NBS mandated by Brooks Act (P. L. 89-306)" in 1965, which is not necessarily research. The claim says the agency's Information Technology Laboratory (ITL) built capabilities in cryptography, biometrics, and data processing standards through the 1990s and 2000s, establishing foundations for later AI work, but the source only lists events related to these topics and does not state that they established foundations for later AI work.

Claims (1)
Public comments were solicited through January 30, 2026.
Claims (1)
- Up to $70 million over five years for an AI-focused Manufacturing USA institute (announced July 2024, awards pending)
Accurate · 100% · Feb 22, 2026
NIST anticipates funding up to $70 million over a five-year period, subject to the availability of federal funds, for the recipient to establish and operate the new institute.
Claims (1)
Department of State and International Trade Administration.
Accurate · 100% · Feb 22, 2026
NIST partners with other agencies including the U.S. Department of Commerce’s International Trade Administration and the U.S. Department of State on many of these efforts.
Claims (1)
In 2023, NIST released the AI Risk Management Framework and launched the Trustworthy AI Resource Center to help organizations manage AI risks. NIST is actively aligning its AI RMF with the Cybersecurity Framework (CSF) and Privacy Framework, helping organizations unify governance and risk programs under one umbrella.
Accurate · 100% · Feb 22, 2026
In 2023, NIST released the AI Risk Management Framework and launched the Trustworthy AI Resource Center to help organizations manage AI risks.
Claims (2)
In February 2024, Commerce Secretary Gina Raimondo announced the inaugural leadership team, appointing Elizabeth Kelly as director and Elham Tabassi as chief technology officer. The team expanded with five additional senior leaders, including <EntityLink id="paul-christiano">Paul Christiano</EntityLink> (former OpenAI leader and founder of the nonprofit Alignment Research Center) as Head of AI Safety, Adam Russell (director of the Information Sciences Institute's AI Division at the University of Southern California) as Chief Vision Officer, Mara Campbell as acting chief operating officer and chief of staff, Rob Reich as senior advisor, and Mark Latonero as head of international engagement.
Inaccurate · 60% · Feb 22, 2026
U.S. Secretary of Commerce Gina Raimondo announced today key members of the executive leadership team to lead the U.S. AI Safety Institute (AISI), which will be established at the National Institute for Standards and Technology (NIST). Raimondo named Elizabeth Kelly to lead the institute as its inaugural director and Elham Tabassi to serve as chief technology officer.

Unsupported: the source does not mention Paul Christiano, Adam Russell, Mara Campbell, Rob Reich, or Mark Latonero. Minor issue: the source dates the announcement specifically to February 7, 2024.

AI Safety Institute (AISI) within NIST and gave the agency new mandates for AI system evaluation, red-teaming, and international standards coordination. NIST's work spans fundamental research, applied projects in manufacturing and cybersecurity, and the convening of large multi-stakeholder consortia to develop practical guidance for AI deployment.
Minor issues · 85% · Feb 22, 2026
The U.S. AI Safety Institute was established under NIST at the direction of President Biden to support the responsibilities assigned to the Department of Commerce under the president’s landmark Executive Order.

The source only mentions the establishment of the AI Safety Institute (AISI) within NIST and the appointment of its leadership. It does not explicitly state that NIST was given 'new mandates for AI system evaluation, red-teaming, and international standards coordination.' The source mentions NIST's role in developing guidelines and standards for AI measurement but does not provide specific details about NIST's work spanning fundamental research, applied projects in manufacturing and cybersecurity, and the convening of large multi-stakeholder consortia to develop practical guidance for AI deployment.

Claims (2)
Additional partnerships include a $6 million center with Carnegie Mellon University for cooperative AI testing and evaluation research, and over $1.8 million in Small Business Innovation Research (SBIR) awards to 18 companies developing AI-related products, with Phase II funding available up to $400,000.
Not verifiable · 50% · Feb 22, 2026
The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has awarded over $1.8 million to 18 small businesses under the Small Business Innovation Research (SBIR) program.


- $1.8 million in SBIR Phase I awards to 18 small businesses (2025)
Accurate · 100% · Feb 22, 2026
The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has awarded over $1.8 million to 18 small businesses under the Small Business Innovation Research (SBIR) program.
31. AISI Strategic Vision Document · nist.gov · Government
Claims (3)
NIST's core AI mission focuses on promoting "trustworthy AI" through science-based standards and voluntary frameworks rather than regulation. The agency emphasizes that "safety breeds trust, trust enables adoption, and adoption accelerates innovation" as its guiding principle. This approach positions NIST as a coordinator between government, industry, and academia, creating consensus standards that organizations can voluntarily adopt.
AI Safety Institute (AISI) within the agency. AISI published an initial draft report for Managing the Risk of Misuse for Dual-Use Foundation Models (AI 800-1).
AISI operates with three core goals: advancing AI safety science through model testing and red-teaming, disseminating safety practices through guidelines and tools, and supporting stakeholder coordination. The institute established the AI Safety Institute Consortium (AISIC) in February 2024, bringing together over 200 members from academia, advocacy organizations, private industry, and government to develop standards collaboratively.
Claims (2)
The Electronic Privacy Information Center (EPIC) criticized NIST AI 800-1 for deprioritizing bias, discrimination, hallucinations, and privacy risks in its guidance on dual-use foundation models. EPIC argues that threat actors exploit these very vulnerabilities for misuse, and excluding them creates dangerous blind spots.
Accurate · 100% · Feb 22, 2026
As AISI finalizes NIST AI 800-1, however, EPIC encourages the Institute to reconsider its dismissal and deprioritization of bias risks and privacy risks, respectively. Without adequate privacy and bias safeguards in place, AI developers cannot effectively identify, assess, and mitigate misuse risks within dual-use foundation models.
The organization recommended incorporating sociotechnical factors as required by Executive Order 14110.
Accurate · 100% · Feb 22, 2026
EPIC’s recommendations align closely to the goals of Executive Order 14110 and the NIST AI RMF to increase the safety, equity, and reliability of AI technologies, and we strongly believe that incorporating more sociotechnical factors into NIST AI 800-1, such as data privacy vulnerabilities and bias risks, will only improve the effectiveness of techniques to manage the misuse risks of dual-use foundation models.
Claims (1)
In a March 2022 report, the agency noted that AI bias extends beyond data quality to include human biases (such as subjective decisions in filling data gaps) and systemic biases rooted in institutional discrimination. Research lead Reva Schwartz emphasized the need for socio-technical approaches, stating that "organizations often default to overly technical solutions for AI bias issues" that "do not adequately capture the societal impact of AI systems," and that purely technical efforts to solve the problem of bias will come up short.
Accurate · 100% · Feb 22, 2026
“Organizations often default to overly technical solutions for AI bias issues,” Schwartz said. “But these approaches do not adequately capture the societal impact of AI systems. The expansion of AI into many aspects of public life requires extending our view to consider AI within the larger social system in which it operates.”
Claims (6)
In February 2024, Commerce Secretary Gina Raimondo announced the inaugural AISI leadership, appointing Elizabeth Kelly as director and Elham Tabassi as chief technology officer. In April 2024, the team expanded with five additional senior leaders: <EntityLink id="paul-christiano">Paul Christiano</EntityLink> (former OpenAI leader and founder of the nonprofit Alignment Research Center) as head of AI safety, Adam Russell (director of the Information Sciences Institute's AI Division at the University of Southern California) as chief vision officer, Mara Campbell as acting chief operating officer and chief of staff, Rob Reich as senior advisor, and Mark Latonero as head of international engagement.
Minor issues · 90% · Feb 22, 2026
Announced by Commerce Secretary Gina Raimondo, leaders joining the AISI include Paul Christiano, a former OpenAI leader and founder of the nonprofit Alignment Research Center, as head of AI safety; Adam Russell, director of the Information Sciences Institute’s AI Division at the University of Southern California, as chief vision officer; Mara Campbell, former deputy chief operating officer at Commerce’s Economic Development Administration, as acting chief operating officer and chief of staff; Rob Reich, professor of political science at Stanford University and associate director of the Institute for Human-Centered AI, as senior advisor; and Mark Latonero, former deputy director of the National AI Initiative Office at the White House Office of Science and Technology Policy as head of international engagement.

The source does not mention Elizabeth Kelly as director and Elham Tabassi as chief technology officer. The source is dated April 16, 2024, not February 2024.

| Head of AI Safety | Paul Christiano | Former OpenAI leader and founder of the Alignment Research Center, a nonprofit focused on AI alignment research |
Accurate · 100% · Feb 22, 2026
Announced by Commerce Secretary Gina Raimondo, leaders joining the AISI include Paul Christiano, a former OpenAI leader and founder of the nonprofit Alignment Research Center, as head of AI safety
| Chief Vision Officer | Adam Russell | Director of the Information Sciences Institute's AI Division at the University of Southern California |
Accurate · 100% · Feb 22, 2026
Announced by Commerce Secretary Gina Raimondo, leaders joining the AISI include Paul Christiano, a former OpenAI leader and founder of the nonprofit Alignment Research Center, as head of AI safety; Adam Russell, director of the Information Sciences Institute’s AI Division at the University of Southern California, as chief vision officer
Claims (1)
In March 2025, NIST launched the AI Standards "Zero Drafts" Pilot Project, an innovative approach to accelerate standards development. The project creates preliminary stakeholder-driven drafts on topics like AI risk management, transparency, and procurement, which are then submitted to formal standards developing organizations (SDOs) for consensus development.
Accurate · 100% · Feb 22, 2026
The National Institute of Standards and Technology (NIST) is seeking input on its newly launched pilot project, "AI Standards Zero Drafts," which aims to expand participation in AI standards development and help standards developing organizations (SDOs) achieve consensus more quickly.
Claims (2)
Initiated in 2021 and released on January 26, 2023, the framework addresses trustworthy AI attributes including validity, reliability, safety, security, and resilience. On July 26, 2024, NIST released NIST AI 600-1, the Generative AI Profile, which identifies unique risks posed by generative AI systems and proposes tailored management actions for organizations based on their goals and priorities.
Minor issues · 90% · Feb 22, 2026
On July 26, 2024, NIST released NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. The profile can help organizations identify unique risks posed by generative AI and proposes actions for generative AI risk management that best aligns with their goals and priorities.

The claim mentions trustworthy AI attributes including validity, reliability, safety, security, and resilience, but the source does not explicitly list these attributes. The source only mentions that the framework is intended to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. The source states the AI RMF was released on January 26, 2023, not just January 2023.

NIST's core AI mission focuses on promoting "trustworthy AI" through science-based standards and voluntary frameworks rather than regulation. The agency emphasizes that "safety breeds trust, trust enables adoption, and adoption accelerates innovation" as its guiding principle. This approach positions NIST as a coordinator between government, industry, and academia, creating consensus standards that organizations can voluntarily adopt.
Accurate · 100% · Feb 22, 2026
The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
Citation verification: 35 verified, 2 flagged, 8 unchecked of 68 total

Related Pages

Top Related Pages

Safety Research

Scalable Oversight

Approaches

Cooperative AI

Policy

AI Safety Institutes (AISIs)

AI Standards Development

Concepts

Agentic AI

Government Orgs Overview

Risks

Emergent Capabilities

Organizations

Alignment Research Center

US AI Safety Institute

Frontier Model Forum

Global Partnership on Artificial Intelligence (GPAI)

METR

Centre for Long-Term Resilience

Other

Elizabeth Kelly