Voluntary AI Safety Commitments
In July 2023, the White House secured voluntary commitments from leading AI companies on safety, security, and trust. These commitments represent the first coordinated industry-wide AI safety pledges, establishing baseline practices for frontier AI development.
Related Legislation
| Name | Status |
|---|---|
| International AI Safety Summit Series | in-effect |
| US Executive Order on Safe, Secure, and Trustworthy AI | revoked |
Top Related Pages
- **US Executive Order on Safe, Secure, and Trustworthy AI**: Executive Order 14110 (October 2023) placed 150 requirements on 50+ federal entities, established compute-based reporting thresholds (10^26 FLOP fo...
- **International AI Safety Summit Series**: Global diplomatic initiatives bringing together 28+ countries and major AI companies to establish international coordination on AI safety, producin...
- **Anthropic**: An AI safety company founded by former OpenAI researchers that develops frontier AI models while pursuing safety research, including the Claude mod...
- **OpenAI**: Leading AI lab that developed GPT models and ChatGPT, analyzing organizational evolution from non-profit research to commercial AGI development ami...
- **Safe and Secure Innovation for Frontier Artificial Intelligence Models Act**: Proposed state legislation for frontier AI safety requirements (vetoed)
Related Topics
- Organizations
- Safety Research
- Historical
Quick Facts
- Author / Sponsor: White House
- Introduced: July 2023
- Status: Active; 16+ companies participating with varying compliance (53% mean)
- Scope: International
Sources
- White House Fact Sheet: Voluntary AI Commitments (July 21, 2023)
- Anthropic's Responsible Scaling Policy (September 2023)
- OpenAI Preparedness Framework (December 2023)
- Google DeepMind Frontier Safety Framework (May 2024)
- Bletchley Declaration (November 2023)
- Analysis: Are Voluntary AI Commitments Enough? (GovAI, 2024)