Responsible Scaling Policies
Responsible Scaling Policies (RSPs) are voluntary commitments by AI labs to pause scaling when specified capability or safety thresholds are crossed. As of December 2025, 20 companies have published such policies, though SaferAI grades the three major frameworks only 1.9-2.2 out of 5 for specificity.
Related Pages
METR
Model Evaluation and Threat Research conducts dangerous capability evaluations for frontier AI models, testing for autonomous replication, cybersec...
Anthropic
An AI safety company founded by former OpenAI researchers that develops frontier AI models while pursuing safety research, including the Claude mod...
OpenAI
Leading AI lab that developed GPT models and ChatGPT, analyzing organizational evolution from non-profit research to commercial AGI development ami...
Google DeepMind
Google's merged AI research lab behind AlphaGo, AlphaFold, and Gemini, formed from combining DeepMind and Google Brain in 2023 to compete with OpenAI
Corporate Influence on AI Policy
A comprehensive analysis of directly influencing frontier AI labs through working inside them, shareholder activism, whistleblowing, and transparen...
Quick Facts
- Introduced: September 2023
- Status: Active; 20+ companies participating
- Scope: International
Sources
- Anthropic's Responsible Scaling Policy (Anthropic, September 2023)
- OpenAI Preparedness Framework (OpenAI, December 2023)
- Google DeepMind Frontier Safety Framework (Google DeepMind, May 2024)