Longterm Wiki
Updated 2026-03-13
Edited 1 day ago

Entities

Unified view of all 734 entities: 513 have wiki pages, 512 have importance scores, 513 have coverage data. Use preset buttons to switch between views (Overview, Entities, Importance, Quality, Coverage, Citations, Updates) or toggle individual columns.
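Conceptually, each row in the table below is one entity record. As a minimal sketch (the interface and field names here are illustrative assumptions, not the wiki's actual schema), the record shape and the summary counts above could look like this in TypeScript:

```typescript
// Illustrative sketch only: field names and types are assumptions,
// not the wiki's real data model.
interface EntityRecord {
  title: string;
  type: string;              // e.g. "concept", "organization", "policy", "risk"
  quality?: number;          // 0-100; present only for entities with wiki pages
  importance?: number;       // 0-100 reader importance
  coveragePassing?: number;  // items passing, out of 13
  hallucinationRisk: "low" | "medium" | "high";
  wordCount: number;
  category: string;          // e.g. "models", "risks", "responses"
}

// Deriving the headline counts ("513 have wiki pages, 512 have
// importance scores, 513 have coverage data") from the full list:
function summarize(entities: EntityRecord[]) {
  return {
    total: entities.length,
    withPages: entities.filter(e => e.quality !== undefined).length,
    withImportance: entities.filter(e => e.importance !== undefined).length,
    withCoverage: entities.filter(e => e.coveragePassing !== undefined).length,
  };
}
```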

734 entities
| Entity / page title | Entity type | Quality (0-100) | Reader importance (0-100) | Coverage (of 13) | Hallucination risk | Last update | Word count | Page category |
|---|---|---|---|---|---|---|---|---|
| AI Timelines | concept | 95 | 93 | 6/13 | medium | 1d | 6.5k | models |
| Superintelligence | concept | 92 | 95 | 3/13 | medium | 1d | 1.6k | risks |
| Existential Risk from AI | concept | 92 | 95 | 4/13 | medium | 1d | 1.2k | risks |
| AI Scaling Laws | concept | 92 | 93 | 6/13 | medium | 1d | 2.5k | models |
| Long-Timelines Technical Worldview | concept | 91 | 15 | 6/13 | medium | 1d | 4.7k | worldviews |
| Optimistic Alignment Worldview | concept | 91 | 83 | 7/13 | medium | 1d | 4.4k | worldviews |
| US AI Safety Institute | organization | 91 | 32 | 4/13 | high | 1d | 4.8k | organizations |
| US Executive Order on Safe, Secure, and Trustworthy AI | policy | 91 | 57 | 7/13 | medium | 1d | 4.5k | responses |
| Voluntary AI Safety Commitments | policy | 91 | 50 | 5/13 | medium | 1d | 4.6k | responses |
| AI Governance Coordination Technologies | approach | 91 | 70 | 9/13 | low | 1d | 2.9k | responses |
| AI-Human Hybrid Systems | approach | 91 | 63 | 9/13 | medium | 1d | 2.4k | responses |
| AI Alignment | approach | 91 | 95 | 10/13 | medium | 1d | 5.7k | responses |
| Scheming & Deception Detection | approach | 91 | 58 | 8/13 | low | 1d | 3.3k | responses |
| Capability Elicitation | approach | 91 | 50 | 8/13 | low | 1d | 3.5k | responses |
| AI Safety Cases | approach | 91 | 51 | 7/13 | low | 1d | 4.1k | responses |
| Weak-to-Strong Generalization | approach | 91 | 20 | 7/13 | medium | 1d | 2.9k | responses |
| AI Safety Intervention Portfolio | approach | 91 | 61 | 10/13 | low | 1d | 2.8k | responses |
| Compute Thresholds | policy | 91 | 56 | 7/13 | medium | 1d | 4.0k | responses |
| Pause Advocacy | approach | 91 | 52 | 7/13 | medium | 1d | 5.3k | responses |
| International Coordination Mechanisms | policy | 91 | 24 | 5/13 | medium | 1d | 4.1k | responses |
| Sparse Autoencoders (SAEs) | approach | 91 | 20 | 7/13 | low | 1d | 3.2k | responses |
| Eliciting Latent Knowledge (ELK) | approach | 91 | 24 | 7/13 | low | 1d | 2.5k | responses |
| Sandboxing / Containment | approach | 91 | 58 | 7/13 | low | 1d | 4.3k | responses |
| Structured Access / API-Only | approach | 91 | 79 | 7/13 | low | 1d | 3.5k | responses |
| Tool-Use Restrictions | approach | 91 | 58 | 8/13 | medium | 1d | 3.9k | responses |
| Deepfake Detection | approach | 91 | 22 | 7/13 | low | 1d | 2.9k | responses |
| AI Authoritarian Tools | risk | 91 | 18 | 8/13 | medium | 1d | 2.9k | risks |
| Bioweapons Risk | risk | 91 | 63 | 8/13 | medium | 1d | 10.8k | risks |
| Cyberweapons Risk | risk | 91 | 83 | 7/13 | medium | 1d | 4.2k | risks |
| AI Distributional Shift | risk | 91 | 17 | 6/13 | medium | 1d | 3.6k | risks |
| AI-Induced Enfeeblement | risk | 91 | 77 | 9/13 | medium | 1d | 2.4k | risks |
| Erosion of Human Agency | risk | 91 | 19 | 9/13 | medium | 1d | 1.8k | risks |
| Multipolar Trap (AI Development) | risk | 91 | 84 | 5/13 | medium | 1d | 3.9k | risks |
| Reward Hacking | risk | 91 | 16 | 6/13 | medium | 1d | 4.0k | risks |
| Scientific Knowledge Corruption | risk | 91 | 38 | 9/13 | medium | 1d | 1.9k | risks |
| AI Model Steganography | risk | 91 | 70 | 9/13 | medium | 1d | 2.4k | risks |
| AI-Enabled Untraceable Misuse | risk | 88 | 48 | 5/13 | medium | 1d | 2.8k | risks |
| OpenAI Foundation | organization | 87 | 87 | 7/13 | medium | 1d | 9.0k | organizations |
| FTX Collapse: Lessons for EA Funding Resilience | concept | 78 | 65 | 6/13 | high | 1d | 5.7k | organizations |
| AI Compute Scaling Metrics | analysis | 78 | 82 | 5/13 | medium | 1d | 3.5k | models |
| Centre for Effective Altruism | organization | 78 | 42 | 5/13 | high | 1d | 2.0k | organizations |
| Redwood Research | organization | 78 | 32 | 7/13 | medium | 1d | 1.5k | organizations |
| Sleeper Agents: Training Deceptive LLMs | risk | 78 | 17 | 6/13 | medium | 1d | 1.8k | risks |
| FAR AI | organization | 76 | 85 | 8/13 | high | 1d | 3.3k | organizations |
| OpenAI Foundation Governance Paradox | analysis | 75 | 40 | 6/13 | medium | 1d | 2.6k | organizations |
| AI Control | safety-agenda | 75 | 69 | 8/13 | low | 1d | 3.1k | responses |
| State Capacity and AI Governance | concept | 75 | 72 | 5/13 | medium | 1d | 2.2k | responses |
| Deceptive Alignment | risk | 75 | 19 | 9/13 | medium | 1d | 2.0k | risks |
| Relative Longtermist Value Comparisons | analysis | 74 | 68 | 6/13 | medium | 1d | 2.4k | models |
| Anthropic | organization | 74 | 52 | 8/13 | high | 1d | 5.1k | organizations |
Page 1 of 15 (showing 50 of 734 entities).