Skip to content
Longterm Wiki
Back

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: OpenAI

OpenAI's official institutional framework for catastrophic risk evaluation; relevant for understanding how leading AI labs operationalize safety policies and set deployment guardrails for frontier models.

Metadata

Importance: 72/100 · organizational report · primary source

Summary

OpenAI's Preparedness Framework outlines a structured approach to evaluating and managing catastrophic risks from frontier AI models, including threats related to CBRN weapons, cyberattacks, and loss of human control. It defines risk severity thresholds and ties model deployment decisions to safety evaluations. The framework represents OpenAI's operational policy for responsible frontier model development.

Key Points

  • Defines 'Preparedness' as the function responsible for tracking, evaluating, and forecasting catastrophic risks from frontier AI models.
  • Establishes risk categories including CBRN (chemical, biological, radiological, nuclear), cybersecurity, model autonomy, and societal disruption.
  • Sets deployment thresholds: models rated 'critical' risk cannot be deployed; 'high' risk models require safeguards before release.
  • Introduces a Safety Advisory Group and oversight structure to review evaluations and recommend deployment decisions to leadership.
  • Represents a living policy document subject to revision as capabilities and understanding evolve.
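The deployment-threshold logic in the points above can be sketched as follows. This is an illustrative simplification, not OpenAI's actual tooling; the names and the four-level scale are assumptions drawn from the framework's public description:

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    """Illustrative risk severity scale (names assumed)."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

def deployment_decision(risk: RiskLevel, safeguards_in_place: bool) -> str:
    """Gate deployment on evaluated risk, per the framework's stated
    thresholds: 'critical' cannot be deployed; 'high' requires
    safeguards before release. Sketch only."""
    if risk == RiskLevel.CRITICAL:
        return "block"  # critical-risk models cannot be deployed
    if risk == RiskLevel.HIGH:
        # high-risk models require safeguards before release
        return "deploy" if safeguards_in_place else "hold"
    return "deploy"
```

In practice the framework routes such recommendations through the Safety Advisory Group rather than applying them mechanically; the sketch only captures the stated thresholds.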

Cited by 7 pages

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 8 KB
Our updated Preparedness Framework | OpenAI

The Wayback Machine - http://web.archive.org/web/20260318212156/https://openai.com/index/updating-our-preparedness-framework/

 


OpenAI

April 15, 2025
Publication · Safety

Our updated Preparedness Framework

Sharing our updated framework for measuring and protecting against severe harm from frontier AI capabilities.

Read full document

Share

We’re releasing an update to our Preparedness Framework, our process for tracking and preparing for advanced AI capabilities that could introduce new risks of severe harm. As our models continue to get more capable, safety will increasingly depend on having the right real-world safeguards in place.

This update introduces a sharper focus on the specific risks that matter most, stronger requirements for what it means to “sufficiently minimize” those risks in practice, and clearer operational guidance on how we evaluate, govern, and disclose our safeguards. Additionally, we introduce future-facing research categories that allow us to remain at the forefront of understanding emerging capabilities to keep pace with where the technology is headed. We will continue investing deeply in this process by making our preparedness work more actionable, rigorous, and transparent as the technology advances.

We’ve learned a great deal from our own testing, insights from external experts, and lessons from the field. This update reflects that progress. In line with our core safety principles, it makes targeted improvements that include:

Clear criteria for prioritizing high-risk capabilities. We use a structured risk assessment process to evaluate whether a frontier capability could lead to severe harm, and we assign it to a category based on defined criteria. We track capabilities that meet five key criteria, which make preparing in advance a priority: the risk should be plausible, measurable, severe, net new, and instantaneous or irremediable. We measure progress on these capabilities and build safeguards against the risks they create.
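The five prioritization criteria read as a conjunctive filter: a capability is tracked only if every criterion holds. A minimal sketch, where the field names and boolean simplification are assumptions for illustration, not OpenAI's actual assessment schema:

```python
from dataclasses import dataclass

@dataclass
class CapabilityAssessment:
    """The five prioritization criteria named in the framework,
    reduced to booleans for illustration."""
    plausible: bool
    measurable: bool
    severe: bool
    net_new: bool
    instantaneous_or_irremediable: bool

def is_tracked_priority(a: CapabilityAssessment) -> bool:
    """A capability is a tracked priority only if it meets all five criteria."""
    return all([a.plausible, a.measurable, a.severe,
                a.net_new, a.instantaneous_or_irremediable])
```

Real assessments are judgment calls with graded evidence rather than clean booleans; the sketch only shows that the criteria combine conjunctively.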

Sharper capability categories. We've updated our categorization of capabilities to apply these cri

... (truncated, 8 KB total)
Resource ID: ded0b05862511312 | Stable ID: sid_jTSyHcoZ8U