NIST AI Risk Management Framework (AI RMF)
The NIST AI Risk Management Framework (AI RMF) is a voluntary guidance document developed by the National Institute of Standards and Technology to help organizations manage risks associated with AI systems.
Related Wiki Pages
California SB 53
California's Transparency in Frontier Artificial Intelligence Act, the first U.S. state law regulating frontier AI models through transparency requirements.
AI Standards Development
International and national organizations developing AI technical standards that create compliance pathways for regulations, influence procurement p...
NIST and AI Safety
The National Institute of Standards and Technology's role in developing AI standards, risk management frameworks, and safety guidelines for the United States.
AI Governance & Policy (Overview)
Overview of governance approaches to AI safety—legislation, compute governance, international coordination, and industry self-regulation—with enfor...
Corporate AI Safety Responses
How major AI companies are responding to safety concerns through internal policies, responsible scaling frameworks, safety teams, and disclosure pr...
Quick Facts
- Jurisdiction: United States
- Author / Sponsor: NIST
- Introduced: Jan 2023
- Status: In effect
- Scope: Federal