King & Spalding: NIST Releases Series of AI Guidelines, Software
This law firm client alert summarizes NIST's mid-2024 AI guidance releases tied to the Biden AI Executive Order, useful for tracking the evolution of U.S. federal AI safety and governance standards.
Metadata
Importance: 42/100 | news article | news
Summary
King & Spalding's client alert summarizes NIST's July 2024 publication of three AI guidelines and a software package for measuring adversarial attack impacts on AI systems, all issued in response to President Biden's Executive Order on Safe, Secure, and Trustworthy AI. The alert provides legal analysis of these developments for organizations subject to federal AI compliance requirements.
Key Points
- NIST published three new AI guidelines in July 2024 as part of ongoing implementation of Biden's AI Executive Order.
- NIST released a software package to help organizations measure the impact of adversarial attacks on AI system performance.
- The announcement came at the 270-day mark following the AI Executive Order, reflecting a structured compliance timeline.
- King & Spalding frames this as a client alert, highlighting legal and compliance implications for organizations deploying AI systems.
- These actions represent a significant expansion of the federal AI governance and risk management framework under NIST's purview.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| NIST and AI Safety | Organization | 63.0 |
Cached Content Preview
HTTP 200 | Fetched Apr 9, 2026 | 7 KB
NIST Releases Series of AI Guidelines & Software in Ongoing Response to AI Executive Order - King & Spalding
Client Alert
August 12, 2024
NIST Releases Series of AI Guidelines & Software in Ongoing Response to AI Executive Order
The U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) recently announced the publication of three AI guidelines, as well as the release of a software package aimed at helping organizations measure the impact of adversarial attacks on AI system performance. These actions all respond to President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, signed on October 30, 2023.
NIST & President Biden’s AI Executive Order
President Biden’s AI Executive Order was published with an accompanying Fact Sheet, which included action items for the various departments and agencies of the executive branch. The Fact Sheet spotlighted, among other things, the need to create new standards for AI safety and security, and it did so specifically in relation to NIST, directing the agency to:
- Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy; and
- Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
Since then, NIST has announced ongoing efforts to work with private and public stakeholders to fulfill these obligations. As stated by Under Secretary of Commerce for Standards and Technology and NIST Director Laurie Locascio: “We are committed to developing meaningful evaluation guidelines, testing environments, and information resources to help organizations develop, deploy, and use AI technologies that are safe and secure, and that enhance AI trustworthiness.”
NIST kicked off these efforts by hosting a workshop in November 2023, inviting private and public stakeholders to begin identifying working groups for the various deliverables required under the AI Executive Order. These meetings helped chart the path toward NIST’s recently published guidelines.
NIST’s New AI Guidelines
Building on the AI Risk Management Framework (“AI RMF”) published by NIST in January 2023, NIST collaborated with private and public stakeholders, including
... (truncated, 7 KB total)
Resource ID: 785f614a7ae5b13e | Stable ID: sid_pIyM7fNs69