NIST AI Risk Management Framework (AI RMF)
The NIST AI Risk Management Framework (AI RMF) is a voluntary guidance document developed by the National Institute of Standards and Technology to help organizations manage risks associated with AI systems.
Quick Facts
- Author / Sponsor: NIST
- Introduced: January 2023
- Status: Voluntary guidance; mandatory for federal agencies under EO 14110
- Scope: Federal
Sources
- AI Risk Management Framework, NIST
- AI RMF Playbook, NIST
- Generative AI Profile (NIST AI 600-1), NIST, July 2024