Entities & Pages
Master list of all entities in the system: organizations, people, concepts, models, and more. Each entry shows the entity type, aliases, FactBase fact count, quality score, and importance ranking. Use this view to find entities that need pages, identify gaps in coverage, or check the overall entity inventory. Run `pnpm crux fb list` for the CLI equivalent.
Unified view of all 2000 entities: 565 have wiki pages, 511 have importance scores, 0 have coverage data, 0 have sourcing verdicts, and 8 have resource links. Use the preset buttons to switch views; the Source Check preset shows sourcing accuracy and coverage.
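A minimal sketch of driving the CLI equivalent from a script, for example to sanity-check the entity count shown above. Only the `pnpm crux fb list` command comes from this page; the assumption that it prints one entity per line is hypothetical and may not match the actual output format.

```ts
// Sketch: run the CLI listing and count entities.
// ASSUMPTION: `pnpm crux fb list` emits one entity per line of stdout.
import { execSync } from "node:child_process";

const out = execSync("pnpm crux fb list", { encoding: "utf8" });
const lines = out.trim().split("\n").filter((l) => l.length > 0);
console.log(`${lines.length} entities listed`);
```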
565 of 2000 entities (showing page 1 of 12, 50 rows per page)
| Entity / page title | Entity type | Quality score (0-100) | Reader importance (0-100) | Coverage (passing of 13 checks) | Data depth (1-4, structured-data scoring) | Hallucination risk | Time since last update | Word count | Page category |
|---|---|---|---|---|---|---|---|---|---|
| AI Timelines | concept | 95 | 93 | - | - | - | 4mo | 6.5k | models |
| Superintelligence | concept | 92 | 95 | - | - | - | 4mo | 1.6k | risks |
| Existential Risk from AI | concept | 92 | 95 | - | - | - | 4mo | 4.0k | risks |
| AI Scaling Laws | concept | 92 | 93 | - | - | - | 4mo | 2.5k | models |
| Long-Timelines Technical Worldview | concept | 91 | 15 | - | - | - | 2mo | 4.7k | worldviews |
| Optimistic Alignment Worldview | concept | 91 | 83 | - | - | - | 2mo | 4.5k | worldviews |
| US AI Safety Institute | organization | 91 | 32 | - | - | - | 6w | 4.8k | organizations |
| US Executive Order on Safe, Secure, and Trustworthy AI | policy | 91 | 57 | - | - | - | 6w | 4.5k | responses |
| Voluntary AI Safety Commitments | policy | 91 | 50 | - | - | - | 4mo | 4.6k | responses |
| AI Governance Coordination Technologies | approach | 91 | 70 | - | - | - | 4mo | 2.9k | responses |
| AI-Human Hybrid Systems | approach | 91 | 63 | - | - | - | 4mo | 2.4k | responses |
| AI Alignment | approach | 91 | 95 | - | - | - | 2mo | 5.7k | responses |
| Scheming & Deception Detection | approach | 91 | 58 | - | - | - | 2mo | 3.3k | responses |
| Capability Elicitation | approach | 91 | 50 | - | - | - | 2mo | 3.5k | responses |
| AI Safety Cases | approach | 91 | 51 | - | - | - | 2mo | 4.1k | responses |
| Weak-to-Strong Generalization | approach | 91 | 20 | - | - | - | 2mo | 2.9k | responses |
| AI Safety Intervention Portfolio | approach | 91 | 61 | - | - | - | 2mo | 2.8k | responses |
| Compute Thresholds | concept | 91 | 56 | - | - | - | 2mo | 4.0k | responses |
| Pause Advocacy | approach | 91 | 52 | - | - | - | 2mo | 5.3k | responses |
| International Coordination Mechanisms | concept | 91 | 24 | - | - | - | 2mo | 4.1k | responses |
| Sparse Autoencoders (SAEs) | approach | 91 | 20 | - | - | - | 2mo | 3.2k | responses |
| Eliciting Latent Knowledge (ELK) | approach | 91 | 24 | - | - | - | 2mo | 2.5k | responses |
| Sandboxing / Containment | approach | 91 | 58 | - | - | - | 2mo | 4.3k | responses |
| Structured Access / API-Only | approach | 91 | 79 | - | - | - | 2mo | 3.5k | responses |
| Tool-Use Restrictions | approach | 91 | 58 | - | - | - | 2mo | 3.9k | responses |
| Deepfake Detection | approach | 91 | 22 | - | - | - | 2mo | 2.9k | responses |
| AI Authoritarian Tools | risk | 91 | 18 | - | - | - | 4mo | 2.9k | risks |
| Bioweapons Risk | risk | 91 | 63 | - | - | - | 4mo | 10.8k | risks |
| Cyberweapons Risk | risk | 91 | 83 | - | - | - | 4mo | 4.2k | risks |
| AI Distributional Shift | risk | 91 | 17 | - | - | - | 4mo | 3.6k | risks |
| AI-Induced Enfeeblement | risk | 91 | 77 | - | - | - | 4mo | 2.4k | risks |
| Erosion of Human Agency | risk | 91 | 19 | - | - | - | 4mo | 1.8k | risks |
| Multipolar Trap (AI Development) | risk | 91 | 84 | - | - | - | 4mo | 3.9k | risks |
| Reward Hacking | risk | 91 | 16 | - | - | - | 4mo | 4.0k | risks |
| Scientific Knowledge Corruption | risk | 91 | 38 | - | - | - | 4mo | 1.9k | risks |
| AI Model Steganography | risk | 91 | 70 | - | - | - | 2mo | 2.4k | risks |
| AI-Enabled Untraceable Misuse | risk | 88 | 48 | - | - | - | 2mo | 2.8k | risks |
| OpenAI Foundation | organization | 87 | 87 | - | - | - | 6w | 9.0k | organizations |
| AI Safety Multi-Actor Strategic Landscape | analysis | 79 | 63 | - | - | - | 2mo | 1.9k | models |
| FTX Collapse: Lessons for EA Funding Resilience | concept | 78 | 65 | - | - | - | 2mo | 5.7k | organizations |
| AI Compute Scaling Metrics | analysis | 78 | 82 | - | - | - | 2mo | 3.5k | models |
| Centre for Effective Altruism | organization | 78 | 42 | - | - | - | 2mo | 2.0k | organizations |
| Redwood Research | organization | 78 | 32 | - | - | - | 2mo | 1.5k | organizations |
| Sleeper Agents: Training Deceptive LLMs | risk | 78 | 17 | - | - | - | 2mo | 1.8k | risks |
| FAR AI | organization | 76 | 85 | - | - | - | 4mo | 3.2k | organizations |
| OpenAI Foundation Governance Paradox | analysis | 75 | 40 | - | - | - | 2mo | 2.6k | organizations |
| AI Control | research-area | 75 | 69 | - | - | - | 6w | 3.1k | responses |
| State Capacity and AI Governance | concept | 75 | 72 | - | - | - | 2mo | 2.3k | responses |
| Deceptive Alignment | risk | 75 | 19 | - | - | - | 4mo | 2.0k | risks |
| Relative Longtermist Value Comparisons | analysis | 74 | 68 | - | - | - | 2mo | 2.5k | models |
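For readers who export this table, here is a minimal TypeScript sketch of one way to rank refresh candidates from it, in the spirit of "find entities that need pages" above. The `EntityRow` field names and the thresholds (importance ≥ 80; quality < 90 or staleness ≥ 4 months) are illustrative assumptions, not the system's actual schema or policy.

```ts
// Hypothetical row shape mirroring the table columns above;
// field names are assumptions, not the system's schema.
interface EntityRow {
  title: string;
  entityType: string;
  qualityScore: number;      // 0-100
  readerImportance: number;  // 0-100
  coveragePassing?: number;  // of 13 checks; absent where no coverage data
  dataDepth?: number;        // 1-4
  hallucinationRisk?: string;
  monthsSinceUpdate: number;
  wordCount: number;
  category: string;
}

// Surface high-importance pages that are stale or below a quality bar.
// Thresholds here are illustrative, not the system's policy.
function refreshCandidates(rows: EntityRow[]): EntityRow[] {
  return rows
    .filter((r) => r.readerImportance >= 80)
    .filter((r) => r.qualityScore < 90 || r.monthsSinceUpdate >= 4)
    .sort((a, b) => b.readerImportance - a.readerImportance);
}

// Example: "AI Timelines" (quality 95, importance 93, updated 4mo ago)
// qualifies on staleness alone.
const sample: EntityRow[] = [
  { title: "AI Timelines", entityType: "concept", qualityScore: 95,
    readerImportance: 93, monthsSinceUpdate: 4, wordCount: 6500,
    category: "models" },
];
console.log(refreshCandidates(sample).map((r) => r.title));
```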