OpenAI Preparedness Framework
Credibility Rating
4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: OpenAI
Data Status
Not fetched
Cited by 19 pages
| Page | Type | Quality |
|---|---|---|
| Large Language Models | Concept | 62.0 |
| Situational Awareness | Capability | 67.0 |
| AI Safety Technical Pathway Decomposition | Analysis | 62.0 |
| Apollo Research | Organization | 58.0 |
| Alignment Evaluations | Approach | 65.0 |
| Capability Elicitation | Approach | 91.0 |
| Dangerous Capability Evaluations | Approach | 64.0 |
| Evals-Based Deployment Gates | Policy | 66.0 |
| AI Evaluations | Safety Agenda | 72.0 |
| AI Evaluation | Approach | 72.0 |
| Third-Party Model Auditing | Approach | 64.0 |
| AI Safety Cases | Approach | 91.0 |
| Scheming & Deception Detection | Approach | 91.0 |
| Technical AI Safety Research | Crux | 66.0 |
| Deceptive Alignment | Risk | 75.0 |
| Mesa-Optimization | Risk | 63.0 |
| AI Capability Sandbagging | Risk | 67.0 |
| Scheming | Risk | 74.0 |
| Treacherous Turn | Risk | 67.0 |
Resource ID: b3f335edccfc5333 | Stable ID: OTg2NjRiYz