Responsible Scaling Policy
Credibility Rating
4/5 – High (4)
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Anthropic
Data Status
Not fetched
Cited by 14 pages
| Page | Type | Quality |
|---|---|---|
| Agentic AI | Capability | 68.0 |
| Should We Pause AI Development? | Crux | 47.0 |
| AI Uplift Assessment Model | Analysis | 70.0 |
| AI Capability Threshold Model | Analysis | 72.0 |
| AI Safety Culture Equilibrium Model | Analysis | 65.0 |
| AI Risk Warning Signs Model | Analysis | 70.0 |
| Anthropic | Organization | 74.0 |
| Alignment Research Center | Organization | 57.0 |
| Daniela Amodei | Person | 21.0 |
| Dario Amodei | Person | 41.0 |
| AI Alignment | Approach | 91.0 |
| Open Source AI Safety | Approach | 62.0 |
| Responsible Scaling Policies | Policy | 62.0 |
| Bioweapons Risk | Risk | 91.0 |
Cached Content Preview
HTTP 200 | Fetched Feb 26, 2026 | 7 KB
Announcements
# Anthropic's Responsible Scaling Policy
Sep 19, 2023

Today, we’re publishing our [Responsible Scaling Policy (RSP)](https://anthropic.com/responsible-scaling-policy) – a series of technical and organizational protocols that we’re adopting to help us manage the risks of developing increasingly capable AI systems.
As AI models become more capable, we believe that they will create major economic and social value, but will also present increasingly severe risks. Our RSP focuses on catastrophic risks – those where an AI model directly causes large-scale devastation. Such risks can come from deliberate misuse of models (for example, use by terrorists or state actors to create bioweapons) or from models that cause destruction by acting autonomously in ways contrary to the intent of their designers.

Our RSP defines a framework called AI Safety Levels (ASL) for addressing catastrophic risks, modeled loosely after the US government’s biosafety level (BSL) standards for the handling of dangerous biological materials. The basic idea is to require safety, security, and operational standards appropriate to a model’s potential for catastrophic risk, with higher ASL levels requiring increasingly strict demonstrations of safety.
A very abbreviated summary of the ASL system is as follows (a toy classification sketch follows the list):
- ASL-1 refers to systems which pose no meaningful catastrophic risk, for example a 2018 LLM or an AI system that only plays chess.
- ASL-2 refers to systems that show early signs of dangerous capabilities – for example, the ability to give instructions on how to build bioweapons – but where the information is not yet useful, either because it is insufficiently reliable or because it provides nothing that, e.g., a search engine couldn’t. Current LLMs, including Claude, appear to be ASL-2.
- ASL-3 refers to systems that substantially increase the risk of catastrophic misuse compared to non-AI baselines (e.g. search engines or textbooks) OR that show low-level autonomous capabilities.
- ASL-4 and higher levels (ASL-5+) are not yet defined, as they are too far from present systems, but will likely involve qualitative escalations in catastrophic misuse potential and autonomy.
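
To make the tiering logic concrete, here is a minimal sketch of the ASL classification described above. Everything in it is illustrative: the `EvalResult` fields and the rules in `classify_asl` are hypothetical stand-ins for Anthropic's actual evaluations, which the RSP document defines in far more detail.

```python
# Illustrative sketch only: a toy model of the ASL tiering summarized above.
# The enum values mirror the post; the evaluation fields and classification
# rules are hypothetical stand-ins, not Anthropic's actual criteria.
from dataclasses import dataclass
from enum import IntEnum


class ASL(IntEnum):
    ASL_1 = 1  # no meaningful catastrophic risk (e.g. a chess engine)
    ASL_2 = 2  # early signs of dangerous capabilities, not yet useful
    ASL_3 = 3  # substantial misuse uplift OR low-level autonomy
    ASL_4 = 4  # placeholder: not yet defined in the RSP


@dataclass
class EvalResult:
    """Hypothetical evaluation summary for a model."""
    shows_dangerous_capability_signs: bool  # e.g. partial bioweapon instructions
    uplift_over_non_ai_baseline: bool       # beyond search engines / textbooks
    low_level_autonomy: bool                # early autonomous-capability signs


def classify_asl(result: EvalResult) -> ASL:
    """Map an evaluation result to the ASL tier it implies.

    Mirrors the summary in the post: ASL-3 triggers on substantial misuse
    uplift OR low-level autonomy; ASL-2 on early capability signs; ASL-1
    otherwise. ASL-4+ is deliberately left undefined, as in the RSP.
    """
    if result.uplift_over_non_ai_baseline or result.low_level_autonomy:
        return ASL.ASL_3
    if result.shows_dangerous_capability_signs:
        return ASL.ASL_2
    return ASL.ASL_1


# Under this toy rule, a 2023-era LLM like Claude lands at ASL-2:
result = classify_asl(EvalResult(
    shows_dangerous_capability_signs=True,
    uplift_over_non_ai_baseline=False,
    low_level_autonomy=False,
))
print(result.name)  # ASL_2
```

Note the OR in the ASL-3 rule: per the post, either substantial misuse uplift or low-level autonomous capability alone is enough to require the stricter tier.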
The definition, criteria, and safety measures for each ASL level are described in detail in the main document, but at a high level, ASL-2 measures represent our current safety and security standards and overlap significantly with our recent [White House commitments](https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage
... (truncated, 7 KB total)
Resource ID: 394ea6d17701b621 | Stable ID: NGU0MjEyN2