Longterm Wiki
Updated 2026-03-13

Entities

A unified view of all 727 tracked entities: 513 have wiki pages, 512 have importance scores, and 513 have coverage data. Use the preset buttons to switch between views (Overview, Entities, Importance, Quality, Coverage, Citations, Updates), or toggle individual columns.

727 entities
| Entity / page title | Entity type | Quality score (0-100) | Reader importance (0-100) | Coverage (passing of 13) | Hallucination risk | Last update | Word count | Category |
|---|---|---|---|---|---|---|---|---|
| AI Timelines | concept | 95 | 93 | 2/13 | medium | 0d | 6.5k | models |
| Superintelligence | concept | 92 | 95 | 3/13 | medium | 1d | 1.6k | risks |
| Existential Risk from AI | concept | 92 | 95 | 3/13 | medium | 1d | 1.2k | risks |
| AI Scaling Laws | concept | 92 | 93 | 4/13 | medium | 1d | 2.5k | models |
| Long-Timelines Technical Worldview | concept | 91 | 15 | 5/13 | medium | 1d | 4.7k | worldviews |
| Optimistic Alignment Worldview | concept | 91 | 83 | 5/13 | medium | 1d | 4.4k | worldviews |
| US AI Safety Institute | organization | 91 | 32 | 3/13 | high | 1d | 4.8k | organizations |
| US Executive Order on Safe, Secure, and Trustworthy AI | policy | 91 | 57 | 6/13 | medium | 1d | 4.5k | responses |
| Voluntary AI Safety Commitments | policy | 91 | 50 | 4/13 | medium | 1d | 4.6k | responses |
| AI Governance Coordination Technologies | approach | 91 | 70 | 8/13 | low | 0d | 2.9k | responses |
| AI-Human Hybrid Systems | approach | 91 | 63 | 8/13 | medium | 0d | 2.4k | responses |
| AI Alignment | approach | 91 | 95 | 7/13 | medium | 0d | 5.7k | responses |
| Scheming & Deception Detection | approach | 91 | 58 | 7/13 | low | 0d | 3.3k | responses |
| Capability Elicitation | approach | 91 | 50 | 7/13 | low | 0d | 3.5k | responses |
| AI Safety Cases | approach | 91 | 51 | 6/13 | low | 0d | 4.1k | responses |
| Weak-to-Strong Generalization | approach | 91 | 20 | 7/13 | medium | 0d | 2.9k | responses |
| AI Safety Intervention Portfolio | approach | 91 | 61 | 8/13 | low | 1d | 2.8k | responses |
| Compute Thresholds | policy | 91 | 56 | 6/13 | medium | 0d | 4.0k | responses |
| Pause Advocacy | approach | 91 | 52 | 6/13 | medium | 1d | 5.3k | responses |
| International Coordination Mechanisms | policy | 91 | 24 | 5/13 | medium | 1d | 4.1k | responses |
| Sparse Autoencoders (SAEs) | approach | 91 | 20 | 7/13 | low | 0d | 3.2k | responses |
| Eliciting Latent Knowledge (ELK) | approach | 91 | 24 | 7/13 | low | 1d | 2.5k | responses |
| Sandboxing / Containment | approach | 91 | 58 | 7/13 | low | 0d | 4.3k | responses |
| Structured Access / API-Only | approach | 91 | 79 | 7/13 | low | 0d | 3.5k | responses |
| Tool-Use Restrictions | approach | 91 | 58 | 7/13 | medium | 0d | 3.9k | responses |
| Deepfake Detection | approach | 91 | 22 | 7/13 | low | 0d | 2.9k | responses |
| AI Authoritarian Tools | risk | 91 | 18 | 7/13 | medium | 0d | 2.9k | risks |
| Bioweapons Risk | risk | 91 | 63 | 6/13 | medium | 1d | 10.8k | risks |
| Cyberweapons Risk | risk | 91 | 83 | 6/13 | medium | 0d | 4.2k | risks |
| AI Distributional Shift | risk | 91 | 17 | 5/13 | medium | 0d | 3.6k | risks |
| AI-Induced Enfeeblement | risk | 91 | 77 | 8/13 | medium | 1d | 2.4k | risks |
| Erosion of Human Agency | risk | 91 | 19 | 8/13 | medium | 0d | 1.8k | risks |
| Multipolar Trap (AI Development) | risk | 91 | 84 | 4/13 | medium | 1d | 3.9k | risks |
| Reward Hacking | risk | 91 | 16 | 5/13 | medium | 0d | 4.0k | risks |
| Scientific Knowledge Corruption | risk | 91 | 38 | 8/13 | medium | 0d | 1.9k | risks |
| AI Model Steganography | risk | 91 | 70 | 8/13 | medium | 1d | 2.4k | risks |
| AI-Enabled Untraceable Misuse | risk | 88 | 48 | 4/13 | medium | 0d | 2.8k | risks |
| OpenAI Foundation | organization | 87 | 87 | 5/13 | medium | 1d | 9.0k | organizations |
| FTX Collapse: Lessons for EA Funding Resilience | concept | 78 | 65 | 4/13 | high | 1d | 5.7k | organizations |
| AI Compute Scaling Metrics | analysis | 78 | 82 | 4/13 | medium | 0d | 3.5k | models |
| Centre for Effective Altruism | organization | 78 | 42 | 4/13 | high | 1d | 2.0k | organizations |
| Redwood Research | organization | 78 | 32 | 6/13 | medium | 1d | 1.5k | organizations |
| Sleeper Agents: Training Deceptive LLMs | risk | 78 | 17 | 4/13 | medium | 0d | 1.8k | risks |
| FAR AI | organization | 76 | 85 | 6/13 | high | 0d | 3.3k | organizations |
| OpenAI Foundation Governance Paradox | analysis | 75 | 40 | 5/13 | medium | 1d | 2.6k | organizations |
| AI Control | safety-agenda | 75 | 69 | 7/13 | low | 0d | 3.1k | responses |
| State Capacity and AI Governance | concept | 75 | 72 | 4/13 | medium | 1d | 2.2k | responses |
| Deceptive Alignment | risk | 75 | 19 | 8/13 | medium | 1d | 2.0k | risks |
| Relative Longtermist Value Comparisons | analysis | 74 | 68 | 5/13 | medium | 1d | 2.4k | models |
| Anthropic | organization | 74 | 52 | 6/13 | high | 0d | 5.1k | organizations |
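Each table row is a fixed record of per-entity metrics, which makes the view easy to query programmatically. A minimal sketch of that record and one filter query, assuming hypothetical field names (this is not the wiki's actual schema, and the two sample rows are copied from the table above):

```python
from dataclasses import dataclass

@dataclass
class EntityRow:
    """One row of the entities table (field names are illustrative assumptions)."""
    title: str
    entity_type: str
    quality: int            # quality score, 0-100
    importance: int         # reader importance, 0-100
    coverage_passing: int   # passing items, out of 13
    risk: str               # hallucination risk: "low" | "medium" | "high"
    days_since_update: int
    words_k: float          # word count, in thousands
    category: str

# Two sample rows taken verbatim from the table above.
rows = [
    EntityRow("AI Timelines", "concept", 95, 93, 2, "medium", 0, 6.5, "models"),
    EntityRow("US AI Safety Institute", "organization", 91, 32, 3, "high", 1, 4.8, "organizations"),
]

# Example query: high-quality pages flagged with high hallucination risk.
flagged = [r.title for r in rows if r.risk == "high" and r.quality >= 90]
print(flagged)  # → ['US AI Safety Institute']
```

The same pattern extends to any of the preset views, e.g. sorting by `coverage_passing` to surface pages missing checklist items.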
Page 1 of 15