OpenAI
Credibility Rating: 4/5 (High)
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: OpenAI
Data Status: Not fetched
Cited by 24 pages
| Page | Type | Quality |
|---|---|---|
| Large Language Models | Capability | 60.0 |
| Large Language Models | Concept | 62.0 |
| AI Accident Risk Cruxes | Crux | 67.0 |
| AI Safety Solution Cruxes | Crux | 65.0 |
| AGI Timeline | Concept | 59.0 |
| Heavy Scaffolding / Agentic Systems | Concept | 57.0 |
| Capabilities-to-Safety Pipeline Model | Analysis | 73.0 |
| AI Capability Threshold Model | Analysis | 72.0 |
| AI Safety Defense in Depth Model | Analysis | 69.0 |
| AI Safety Intervention Effectiveness Matrix | Analysis | 73.0 |
| Mesa-Optimization Risk Analysis | Analysis | 61.0 |
| AI Risk Interaction Network Model | Analysis | 64.0 |
| AI Safety Research Value Model | Analysis | 60.0 |
| Center for AI Safety | Organization | 42.0 |
| Metaculus | Organization | 50.0 |
| Sam Altman | Person | 40.0 |
| Capability Elicitation | Approach | 91.0 |
| EU AI Act | Policy | 55.0 |
| RLHF | Capability | 63.0 |
| Technical AI Safety Research | Crux | 66.0 |
| AI-Driven Concentration of Power | Risk | 65.0 |
| AI Proliferation | Risk | 60.0 |
| AI Development Racing Dynamics | Risk | 72.0 |
| Sycophancy | Risk | 65.0 |
Resource ID: 04d39e8bd5d50dd5 | Stable ID: ZWE4ZTJlZW