
Entities & Pages

Master list of all entities in the system — organizations, people, concepts, models, and more. Shows entity type, aliases, FactBase fact counts, quality scores, and importance rankings. Use this to find entities that need pages, identify gaps in coverage, or check the overall entity inventory. Run pnpm crux fb list for the CLI equivalent.
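
For scripted workflows, the same gap-finding can be approximated outside the dashboard. The sketch below is illustrative only: it assumes a hypothetical JSON export of the entity list, and the EntityRecord shape, field names, and entities.json path are guesses rather than the actual FactBase schema. pnpm crux fb list remains the supported CLI route.

```ts
// Illustrative sketch only: find entities that have an importance score but no wiki page.
// The EntityRecord shape and the entities.json export are assumptions, not the real schema.
import { readFileSync } from "node:fs";

interface EntityRecord {
  name: string;
  type: string;           // e.g. "concept", "organization", "risk", "approach"
  hasPage: boolean;       // whether a wiki page exists for this entity
  qualityScore?: number;  // 0-100, only present for entities with pages
  importance?: number;    // 0-100 reader importance
  factCount?: number;     // FactBase fact count
}

const entities: EntityRecord[] = JSON.parse(
  readFileSync("entities.json", "utf8"),
);

// Candidates for new pages: rated important, but not yet written up.
const gaps = entities
  .filter((e) => !e.hasPage && (e.importance ?? 0) >= 50)
  .sort((a, b) => (b.importance ?? 0) - (a.importance ?? 0));

for (const e of gaps) {
  console.log(`${e.name} (${e.type}): importance ${e.importance}`);
}
```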

A unified view of all 2000 entities: 565 have wiki pages, 511 have importance scores, 0 have coverage data, 0 have sourcing verdicts, and 8 have resource links. Use the preset buttons to switch views; the Source Check preset shows sourcing accuracy and coverage.
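
The counts above are straightforward aggregations over the same entity records. A minimal sketch, reusing the hypothetical EntityRecord idea from the previous example; coverage, sourcingVerdict, and resourceLinks are assumed field names, not the dashboard's real data model:

```ts
// Illustrative sketch only: recompute the dashboard's summary counts from entity records.
// Field names (coverage, sourcingVerdict, resourceLinks) are assumptions.
interface EntitySummaryFields {
  hasPage: boolean;
  importance?: number;
  coverage?: number;        // passing checklist items out of 13
  sourcingVerdict?: string; // e.g. the Source Check preset's verdict
  resourceLinks?: string[];
}

function summarize(entities: EntitySummaryFields[]) {
  const count = (pred: (e: EntitySummaryFields) => boolean) =>
    entities.filter(pred).length;
  return {
    total: entities.length,
    withPages: count((e) => e.hasPage),
    withImportance: count((e) => e.importance !== undefined),
    withCoverage: count((e) => e.coverage !== undefined),
    withSourcingVerdicts: count((e) => e.sourcingVerdict !== undefined),
    withResourceLinks: count((e) => (e.resourceLinks?.length ?? 0) > 0),
  };
}
```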

565 of 2000 entities (page 1 of 12)

| Entity / page title | Entity type | Quality score (0-100) | Reader importance (0-100) | Coverage (passing of 13) | Data depth (1-4) | Hallucination risk | Last update | Word count | Category |
|---|---|---|---|---|---|---|---|---|---|
| AI Timelines | concept | 95 | 93 | - | - | - | 4mo | 6.5k | models |
| Superintelligence | concept | 92 | 95 | - | - | - | 4mo | 1.6k | risks |
| Existential Risk from AI | concept | 92 | 95 | - | - | - | 4mo | 4.0k | risks |
| AI Scaling Laws | concept | 92 | 93 | - | - | - | 4mo | 2.5k | models |
| Long-Timelines Technical Worldview | concept | 91 | 15 | - | - | - | 2mo | 4.7k | worldviews |
| Optimistic Alignment Worldview | concept | 91 | 83 | - | - | - | 2mo | 4.5k | worldviews |
| US AI Safety Institute | organization | 91 | 32 | - | - | - | 6w | 4.8k | organizations |
| US Executive Order on Safe, Secure, and Trustworthy AI | policy | 91 | 57 | - | - | - | 6w | 4.5k | responses |
| Voluntary AI Safety Commitments | policy | 91 | 50 | - | - | - | 4mo | 4.6k | responses |
| AI Governance Coordination Technologies | approach | 91 | 70 | - | - | - | 4mo | 2.9k | responses |
| AI-Human Hybrid Systems | approach | 91 | 63 | - | - | - | 4mo | 2.4k | responses |
| AI Alignment | approach | 91 | 95 | - | - | - | 2mo | 5.7k | responses |
| Scheming & Deception Detection | approach | 91 | 58 | - | - | - | 2mo | 3.3k | responses |
| Capability Elicitation | approach | 91 | 50 | - | - | - | 2mo | 3.5k | responses |
| AI Safety Cases | approach | 91 | 51 | - | - | - | 2mo | 4.1k | responses |
| Weak-to-Strong Generalization | approach | 91 | 20 | - | - | - | 2mo | 2.9k | responses |
| AI Safety Intervention Portfolio | approach | 91 | 61 | - | - | - | 2mo | 2.8k | responses |
| Compute Thresholds | concept | 91 | 56 | - | - | - | 2mo | 4.0k | responses |
| Pause Advocacy | approach | 91 | 52 | - | - | - | 2mo | 5.3k | responses |
| International Coordination Mechanisms | concept | 91 | 24 | - | - | - | 2mo | 4.1k | responses |
| Sparse Autoencoders (SAEs) | approach | 91 | 20 | - | - | - | 2mo | 3.2k | responses |
| Eliciting Latent Knowledge (ELK) | approach | 91 | 24 | - | - | - | 2mo | 2.5k | responses |
| Sandboxing / Containment | approach | 91 | 58 | - | - | - | 2mo | 4.3k | responses |
| Structured Access / API-Only | approach | 91 | 79 | - | - | - | 2mo | 3.5k | responses |
| Tool-Use Restrictions | approach | 91 | 58 | - | - | - | 2mo | 3.9k | responses |
| Deepfake Detection | approach | 91 | 22 | - | - | - | 2mo | 2.9k | responses |
| AI Authoritarian Tools | risk | 91 | 18 | - | - | - | 4mo | 2.9k | risks |
| Bioweapons Risk | risk | 91 | 63 | - | - | - | 4mo | 10.8k | risks |
| Cyberweapons Risk | risk | 91 | 83 | - | - | - | 4mo | 4.2k | risks |
| AI Distributional Shift | risk | 91 | 17 | - | - | - | 4mo | 3.6k | risks |
| AI-Induced Enfeeblement | risk | 91 | 77 | - | - | - | 4mo | 2.4k | risks |
| Erosion of Human Agency | risk | 91 | 19 | - | - | - | 4mo | 1.8k | risks |
| Multipolar Trap (AI Development) | risk | 91 | 84 | - | - | - | 4mo | 3.9k | risks |
| Reward Hacking | risk | 91 | 16 | - | - | - | 4mo | 4.0k | risks |
| Scientific Knowledge Corruption | risk | 91 | 38 | - | - | - | 4mo | 1.9k | risks |
| AI Model Steganography | risk | 91 | 70 | - | - | - | 2mo | 2.4k | risks |
| AI-Enabled Untraceable Misuse | risk | 88 | 48 | - | - | - | 2mo | 2.8k | risks |
| OpenAI Foundation | organization | 87 | 87 | - | - | - | 6w | 9.0k | organizations |
| AI Safety Multi-Actor Strategic Landscape | analysis | 79 | 63 | - | - | - | 2mo | 1.9k | models |
| FTX Collapse: Lessons for EA Funding Resilience | concept | 78 | 65 | - | - | - | 2mo | 5.7k | organizations |
| AI Compute Scaling Metrics | analysis | 78 | 82 | - | - | - | 2mo | 3.5k | models |
| Centre for Effective Altruism | organization | 78 | 42 | - | - | - | 2mo | 2.0k | organizations |
| Redwood Research | organization | 78 | 32 | - | - | - | 2mo | 1.5k | organizations |
| Sleeper Agents: Training Deceptive LLMs | risk | 78 | 17 | - | - | - | 2mo | 1.8k | risks |
| FAR AI | organization | 76 | 85 | - | - | - | 4mo | 3.2k | organizations |
| OpenAI Foundation Governance Paradox | analysis | 75 | 40 | - | - | - | 2mo | 2.6k | organizations |
| AI Control | research-area | 75 | 69 | - | - | - | 6w | 3.1k | responses |
| State Capacity and AI Governance | concept | 75 | 72 | - | - | - | 2mo | 2.3k | responses |
| Deceptive Alignment | risk | 75 | 19 | - | - | - | 4mo | 2.0k | risks |
| Relative Longtermist Value Comparisons | analysis | 74 | 68 | - | - | - | 2mo | 2.5k | models |