The National Institute of Standards and Technology's role in developing AI standards, risk management frameworks, and safety guidelines for the United States.
Related Wiki Pages
OpenAI
Leading AI lab that developed GPT models and ChatGPT, analyzing organizational evolution from non-profit research to commercial AGI development ami...
Paul Christiano
Founder of ARC, creator of iterated amplification and AI safety via debate. Current risk assessment ~10-20% P(doom), AGI 2030s-2040s. Pioneered pro...
Elizabeth Kelly
AI Alignment
Technical approaches to ensuring AI systems pursue intended goals and remain aligned with human values throughout training and deployment. Current ...
NIST AI Risk Management Framework (AI RMF)
US federal voluntary framework for managing AI risks, with 40-60% Fortune 500 adoption and influence on federal policy through Executive Orders, bu...