Entities
A unified view of all 727 entities: 513 have wiki pages, 512 have importance scores, and 513 have coverage data. Use the preset buttons to switch between views (Overview, Entities, Importance, Quality, Coverage, Citations, Updates) or toggle individual columns. A sketch of the underlying row schema follows the table.
| Entity / page title | Entity type | Quality (0-100) | Reader importance (0-100) | Coverage (passing/13) | Hallucination risk | Last updated | Word count | Category |
|---|---|---|---|---|---|---|---|---|
| AI Timelines | concept | 95 | 93 | 2/13 | medium | 0d | 6.5k | models |
| Superintelligence | concept | 92 | 95 | 3/13 | medium | 1d | 1.6k | risks |
| Existential Risk from AI | concept | 92 | 95 | 3/13 | medium | 1d | 1.2k | risks |
| AI Scaling Laws | concept | 92 | 93 | 4/13 | medium | 1d | 2.5k | models |
| Long-Timelines Technical Worldview | concept | 91 | 15 | 5/13 | medium | 1d | 4.7k | worldviews |
| Optimistic Alignment Worldview | concept | 91 | 83 | 5/13 | medium | 1d | 4.4k | worldviews |
| US AI Safety Institute | organization | 91 | 32 | 3/13 | high | 1d | 4.8k | organizations |
| US Executive Order on Safe, Secure, and Trustworthy AI | policy | 91 | 57 | 6/13 | medium | 1d | 4.5k | responses |
| Voluntary AI Safety Commitments | policy | 91 | 50 | 4/13 | medium | 1d | 4.6k | responses |
| AI Governance Coordination Technologies | approach | 91 | 70 | 8/13 | low | 0d | 2.9k | responses |
| AI-Human Hybrid Systems | approach | 91 | 63 | 8/13 | medium | 0d | 2.4k | responses |
| AI Alignment | approach | 91 | 95 | 7/13 | medium | 0d | 5.7k | responses |
| Scheming & Deception Detection | approach | 91 | 58 | 7/13 | low | 0d | 3.3k | responses |
| Capability Elicitation | approach | 91 | 50 | 7/13 | low | 0d | 3.5k | responses |
| AI Safety Cases | approach | 91 | 51 | 6/13 | low | 0d | 4.1k | responses |
| Weak-to-Strong Generalization | approach | 91 | 20 | 7/13 | medium | 0d | 2.9k | responses |
| AI Safety Intervention Portfolio | approach | 91 | 61 | 8/13 | low | 1d | 2.8k | responses |
| Compute Thresholds | policy | 91 | 56 | 6/13 | medium | 0d | 4.0k | responses |
| Pause Advocacy | approach | 91 | 52 | 6/13 | medium | 1d | 5.3k | responses |
| International Coordination Mechanisms | policy | 91 | 24 | 5/13 | medium | 1d | 4.1k | responses |
| Sparse Autoencoders (SAEs) | approach | 91 | 20 | 7/13 | low | 0d | 3.2k | responses |
| Eliciting Latent Knowledge (ELK) | approach | 91 | 24 | 7/13 | low | 1d | 2.5k | responses |
| Sandboxing / Containment | approach | 91 | 58 | 7/13 | low | 0d | 4.3k | responses |
| Structured Access / API-Only | approach | 91 | 79 | 7/13 | low | 0d | 3.5k | responses |
| Tool-Use Restrictions | approach | 91 | 58 | 7/13 | medium | 0d | 3.9k | responses |
| Deepfake Detection | approach | 91 | 22 | 7/13 | low | 0d | 2.9k | responses |
| AI Authoritarian Tools | risk | 91 | 18 | 7/13 | medium | 0d | 2.9k | risks |
| Bioweapons Risk | risk | 91 | 63 | 6/13 | medium | 1d | 10.8k | risks |
| Cyberweapons Risk | risk | 91 | 83 | 6/13 | medium | 0d | 4.2k | risks |
| AI Distributional Shift | risk | 91 | 17 | 5/13 | medium | 0d | 3.6k | risks |
| AI-Induced Enfeeblement | risk | 91 | 77 | 8/13 | medium | 1d | 2.4k | risks |
| Erosion of Human Agency | risk | 91 | 19 | 8/13 | medium | 0d | 1.8k | risks |
| Multipolar Trap (AI Development) | risk | 91 | 84 | 4/13 | medium | 1d | 3.9k | risks |
| Reward Hacking | risk | 91 | 16 | 5/13 | medium | 0d | 4.0k | risks |
| Scientific Knowledge Corruption | risk | 91 | 38 | 8/13 | medium | 0d | 1.9k | risks |
| AI Model Steganography | risk | 91 | 70 | 8/13 | medium | 1d | 2.4k | risks |
| AI-Enabled Untraceable Misuse | risk | 88 | 48 | 4/13 | medium | 0d | 2.8k | risks |
| OpenAI Foundation | organization | 87 | 87 | 5/13 | medium | 1d | 9.0k | organizations |
| FTX Collapse: Lessons for EA Funding Resilience | concept | 78 | 65 | 4/13 | high | 1d | 5.7k | organizations |
| AI Compute Scaling Metrics | analysis | 78 | 82 | 4/13 | medium | 0d | 3.5k | models |
| Centre for Effective Altruism | organization | 78 | 42 | 4/13 | high | 1d | 2.0k | organizations |
| Redwood Research | organization | 78 | 32 | 6/13 | medium | 1d | 1.5k | organizations |
| Sleeper Agents: Training Deceptive LLMs | risk | 78 | 17 | 4/13 | medium | 0d | 1.8k | risks |
| FAR AI | organization | 76 | 85 | 6/13 | high | 0d | 3.3k | organizations |
| OpenAI Foundation Governance Paradox | analysis | 75 | 40 | 5/13 | medium | 1d | 2.6k | organizations |
| AI Control | safety-agenda | 75 | 69 | 7/13 | low | 0d | 3.1k | responses |
| State Capacity and AI Governance | concept | 75 | 72 | 4/13 | medium | 1d | 2.2k | responses |
| Deceptive Alignment | risk | 75 | 19 | 8/13 | medium | 1d | 2.0k | risks |
| Relative Longtermist Value Comparisons | analysis | 74 | 68 | 5/13 | medium | 1d | 2.4k | models |
| Anthropic | organization | 74 | 52 | 6/13 | high | 0d | 5.1k | organizations |
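For readers scripting against this view, here is a minimal sketch of one row as a record type. The field names are assumptions (the dashboard's actual export format is not documented here); the enumerated entity types, risk levels, and categories are taken directly from the table above.

```typescript
// Hypothetical row schema for the entities table above.
// Field names are assumptions; the enum values come from the table itself.
type EntityType =
  | "concept" | "organization" | "policy"
  | "approach" | "risk" | "safety-agenda" | "analysis";
type RiskLevel = "low" | "medium" | "high";
type Category =
  | "models" | "risks" | "worldviews" | "organizations" | "responses";

interface EntityRow {
  title: string;            // Entity / page title
  entityType: EntityType;
  quality: number;          // quality score, 0-100
  importance: number;       // reader importance, 0-100
  coveragePassing: number;  // items passing, out of 13
  hallucinationRisk: RiskLevel;
  daysSinceUpdate: number;  // "0d" / "1d" parsed to a number
  wordCount: number;        // "6.5k" parsed to 6500
  category: Category;
}

// The "Quality" preset plausibly sorts by quality score,
// breaking ties on reader importance.
function sortByQuality(rows: EntityRow[]): EntityRow[] {
  return [...rows].sort(
    (a, b) => b.quality - a.quality || b.importance - a.importance
  );
}
```

Storing counts as numbers rather than the display strings ("6.5k", "1d") keeps sorting and filtering trivial; formatting back to the compact display form can happen at render time.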
Showing the first 50 of 727 entities (page 1 of 15).