Longterm Wiki

Tech Policy Press: Unpacking New NIST Guidance on AI

web

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: TechPolicy.Press

This article is useful for tracking US federal AI governance milestones; the NIST documents it covers are key voluntary frameworks shaping responsible AI development practices in the US.

Metadata

Importance: 52/100 · blog post · analysis

Summary

Tech Policy Press analyzes the suite of NIST publications released at the 270-day deadline of Biden's 2023 AI Executive Order, covering the finalized AI Risk Management Framework Generative AI Profile (NIST AI 600-1), secure software development guidance, AI standards reports, and a new testing platform called Dioptra. The article summarizes each document's scope, key risk categories, and voluntary guidelines aimed at improving AI safety, security, and trustworthiness. It serves as an accessible entry point for understanding the policy landscape emerging from the EO.

Key Points

  • NIST released final reports on generative AI risks, secure software, and AI standards as part of the Biden EO's 270-day deadline on July 26, 2024.
  • The AI RMF Generative AI Profile (NIST AI 600-1) provides 200+ voluntary actions across 12 risk categories for managing generative AI risks.
  • US AI Safety Institute published draft guidance to help software developers mitigate risks from generative AI and dual-use foundation models.
  • Dioptra, a new NIST testing platform, helps AI developers measure how adversarial attacks degrade AI system performance.
  • All guidance is voluntary, aimed at balancing innovation support with safety and trustworthiness improvements.

Cited by 1 page

Page | Type | Quality
NIST and AI Safety | Organization | 63.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 12 KB
Unpacking New NIST Guidance on Artificial Intelligence | TechPolicy.Press

 Gabby Miller / Aug 2, 2024 

 President Joe Biden announcing his Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence on Oct. 30, 2023. Source: The White House.

 Last Friday marked the 270-day deadline for a raft of publications released by the National Institute of Standards and Technology (NIST) as part of US President Joe Biden’s 2023 executive order on artificial intelligence. The publications provide voluntary guidelines for AI developers to “improve the safety, security and trustworthiness” of their systems and aim to mitigate generative AI-specific risks while continuing to support innovation.

 The down-to-the-wire rollout included final reports on generative AI, secure software, and AI standards, some of which are follow-ups to draft reports NIST released in the spring. NIST published two additional products, including a draft guidance document from the US AI Safety Institute (AISI) meant to help software developers mitigate risks stemming from generative AI and dual-use foundation models, as well as a novel testing platform, called Dioptra, to help AI system developers measure how certain attacks degrade their AI systems’ performance.

 Related Reading: 

 NIST Unveils Draft Guidance Reports Following Biden's AI Executive Order 
 Five Takeaways from the NIST AI Risk Management Framework 
 Reconciling Agile Development With AI Safety 
 Summaries of each NIST document or product published on July 26, 2024 can be found below. 

 Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile 

 The final version of NIST’s AI Risk Management Framework Generative AI Profile (RMF GAI, NIST AI 600-1) was published on Friday. The initial draft was publicly released in January 2023 and underwent several draft versions that took into account public comments, workshops, and other opportunities for feedback. It’s meant to be a companion resource to NIST’s more comprehensive Risk Management Framework (RMF), and to help organizations identify and propose actions for managing generative AI risks. It provides more than 200 actions across twelve different risk categories for AI developers to consider when managing risks. In March, NIST launched the Trustworthy and Responsible AI Resource Center to implement, operationalize, and facilitate international alignment with the AI RMF.

 The twelve risk categories are:

 Chemical, Biological, Radiological, and Nuclear (CBRN) Information or Capabilities
 Confabulation, or “hallucination”
 Dangerous, Violent, or Hateful Content
 Data Privacy
 Environmental Impacts
 Harmful Bias and Homogenization
 Human-AI Configuration
 Information Integrity
 Information Security
 Intellectual Property
 Obscene, Degrading, and/or Abusive Content
 Value Chain and Component Integration
 NIST defines risk in this context a

... (truncated, 12 KB total)
Resource ID: 67542cf2228f2b60 | Stable ID: sid_nVgPkInnsY