Longterm Wiki
Updated 2026-03-13

Update Schedule

Pages are ranked by update priority. Priority is calculated as staleness (days since the last edit divided by the page's update frequency) weighted by importance. No pages are currently overdue.
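As a rough illustration of the ranking rule above, here is a minimal Python sketch. It assumes a simple linear weighting (priority = staleness × importance/100); the dashboard's exact weighting and staleness inputs are not specified on this page, so the field names are hypothetical and the computed values will not exactly match the Priority column below.

```python
from dataclasses import dataclass

# Hypothetical sketch of the priority ranking described above.
# Assumption: priority = staleness * (importance / 100), where
# staleness = days since last edit / update frequency in days.
# The wiki's actual weighting may differ.

@dataclass
class Page:
    title: str
    days_since_edit: float
    update_frequency_days: float  # e.g. 7 for "Weekly", 21 for "3 weeks"
    importance: float             # 0-100 importance score

def priority(page: Page) -> float:
    staleness = page.days_since_edit / page.update_frequency_days
    return staleness * (page.importance / 100)

def ranked(pages: list[Page]) -> list[Page]:
    # Highest-priority pages (most in need of an update) come first.
    return sorted(pages, key=priority, reverse=True)

if __name__ == "__main__":
    pages = [
        Page("Example weekly page", days_since_edit=1, update_frequency_days=7, importance=72),
        Page("Example 3-weekly page", days_since_edit=1, update_frequency_days=21, importance=80),
    ]
    for p in ranked(pages):
        print(f"{p.title}: {priority(p):.2f}")
```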

539 results
| Page | Frequency | Last edited | Due in | Importance | Priority |
|---|---|---|---|---|---|
| Anthropic-Pentagon Standoff (2026) | Weekly | 1d ago | 2d | 78 | 0.28 |
| OpenAI | Weekly | 1d ago | 2d | 72 | 0.27 |
| OpenAI Foundation | Weekly | 1d ago | 6d | 87 | 0.13 |
| Google DeepMind | Weekly | 1d ago | 2d | 35 | 0.12 |
| Frontier AI Company Comparison (2026) | Weekly | 1d ago | 6d | 67 | 0.11 |
| Meta AI (FAIR) | Weekly | 1d ago | 2d | 30 | 0.10 |
| xAI | Weekly | 1d ago | 2d | 30 | 0.10 |
| Microsoft AI | Weekly | 1d ago | 6d | 43 | 0.09 |
| US Executive Order on Safe, Secure, and Trustworthy AI | Weekly | 1d ago | 6d | 57 | 0.08 |
| Anthropic Stakeholders | Biweekly | 1d ago | 13d | 85 | 0.06 |
| OpenAI Foundation Governance Paradox | Weekly | 1d ago | 6d | 40 | 0.06 |
| Safe Superintelligence Inc (SSI) | Weekly | 1d ago | 6d | 32 | 0.05 |
| Dense Transformers | 3 weeks | 1d ago | 3w | 80 | 0.04 |
| World Models + Planning | 3 weeks | 1d ago | 3w | 75 | 0.04 |
| CAIS (Center for AI Safety) | 3 weeks | 1d ago | 3w | 89 | 0.04 |
| Frontier Model Forum | 3 weeks | 1d ago | 3w | 84 | 0.04 |
| Leading the Future super PAC | 3 weeks | 1d ago | 3w | 80 | 0.04 |
| METR | 3 weeks | 1d ago | 3w | 84 | 0.04 |
| Musk v. OpenAI Lawsuit | Weekly | 1d ago | 6d | 29 | 0.04 |
| NIST and AI Safety | 3 weeks | 1d ago | 3w | 77 | 0.04 |
| Palisade Research | 3 weeks | 1d ago | 3w | 88 | 0.04 |
| Secure AI Project | 3 weeks | 1d ago | 3w | 82 | 0.04 |
| Seldon Lab | 3 weeks | 1d ago | 3w | 86 | 0.04 |
| Elon Musk (AI Industry) | Weekly | 1d ago | 6d | 28 | 0.04 |
| Jan Leike | 3 weeks | 1d ago | 3w | 82 | 0.04 |
| Sam Altman | Weekly | 1d ago | 6d | 27 | 0.04 |
| Third-Party Model Auditing | 3 weeks | 1d ago | 3w | 77 | 0.04 |
| Pause / Moratorium | 3 weeks | 1d ago | 3w | 79 | 0.04 |
| AGI Timeline | 3 weeks | 1d ago | 3w | 56 | 0.03 |
| CHAI (Center for Human-Compatible AI) | 3 weeks | 1d ago | 3w | 69 | 0.03 |
| Anthropic Core Views | 3 weeks | 1d ago | 3w | 53 | 0.03 |
| Bletchley Declaration | 3 weeks | 1d ago | 3w | 53 | 0.03 |
| California SB 1047 | Weekly | 1d ago | 6d | 23 | 0.03 |
| Dangerous Capability Evaluations | 3 weeks | 1d ago | 3w | 71 | 0.03 |
| AI Governance and Policy | 3 weeks | 1d ago | 3w | 65 | 0.03 |
| International Compute Regimes | 3 weeks | 1d ago | 3w | 63 | 0.03 |
| AI Safety Intervention Portfolio | 3 weeks | 1d ago | 3w | 61 | 0.03 |
| AI Alignment Research Agenda Comparison | 3 weeks | 1d ago | 3w | 58 | 0.03 |
| Seoul AI Safety Summit Declaration | 3 weeks | 1d ago | 3w | 57 | 0.03 |
| Bioweapons | 3 weeks | 1d ago | 3w | 63 | 0.03 |
| Self-Improvement and Recursive Enhancement | 3 weeks | 1d ago | 3w | 47 | 0.02 |
| AI Accident Risk Cruxes | 6 weeks | 1d ago | 6w | 94 | 0.02 |
| Open vs Closed Source AI | 3 weeks | 1d ago | 3w | 52 | 0.02 |
| AGI Development | 3 weeks | 1d ago | 3w | 50 | 0.02 |
| Provable / Guaranteed Safe AI | 6 weeks | 1d ago | 6w | 89 | 0.02 |
| 80,000 Hours | 3 weeks | 1d ago | 3w | 51 | 0.02 |
| ARC (Alignment Research Center) | 3 weeks | 1d ago | 3w | 39 | 0.02 |
| Conjecture | 3 weeks | 1d ago | 3w | 36 | 0.02 |
| ControlAI | 3 weeks | 1d ago | 3w | 42 | 0.02 |
| CSER (Centre for the Study of Existential Risk) | 3 weeks | 1d ago | 3w | 36 | 0.02 |
| CSET (Center for Security and Emerging Technology) | 3 weeks | 1d ago | 3w | 34 | 0.02 |
| EA Global | 6 weeks | 1d ago | 6w | 78 | 0.02 |
| EA Shareholder Diversification from Anthropic | Monthly | 1d ago | 4w | 64 | 0.02 |
| Epoch AI | 6 weeks | 1d ago | 6w | 88 | 0.02 |
| Future of Humanity Institute (FHI) | 3 weeks | 1d ago | 3w | 51 | 0.02 |
| Future of Life Institute (FLI) | 6 weeks | 1d ago | 6w | 76 | 0.02 |
| Long-Term Benefit Trust (Anthropic) | 6 weeks | 1d ago | 6w | 78 | 0.02 |
| MATS ML Alignment Theory Scholars program | 3 weeks | 1d ago | 3w | 32 | 0.02 |
| MIRI (Machine Intelligence Research Institute) | 3 weeks | 1d ago | 3w | 32 | 0.02 |
| Redwood Research | 3 weeks | 1d ago | 3w | 32 | 0.02 |
| UK AI Safety Institute | 3 weeks | 1d ago | 3w | 32 | 0.02 |
| US AI Safety Institute | 3 weeks | 1d ago | 3w | 32 | 0.02 |
| Chris Olah | 6 weeks | 1d ago | 6w | 79 | 0.02 |
| Eliezer Yudkowsky | 6 weeks | 1d ago | 6w | 82 | 0.02 |
| Ilya Sutskever | 3 weeks | 1d ago | 3w | 34 | 0.02 |
| Max Tegmark | 6 weeks | 1d ago | 6w | 82 | 0.02 |
| Nick Bostrom | 6 weeks | 1d ago | 6w | 82 | 0.02 |
| Nuño Sempere | 6 weeks | 1d ago | 6w | 83 | 0.02 |
| Is EA Biosecurity Work Limited to Restricting LLM Biological Use? | 3 weeks | 1d ago | 3w | 40 | 0.02 |
| Grokipedia | Monthly | 1d ago | 4w | 29 | 0.02 |
| Mechanistic Interpretability | 3 weeks | 1d ago | 3w | 40 | 0.02 |
| Responsible Scaling Policies | 3 weeks | 1d ago | 3w | 51 | 0.02 |
| Sleeper Agent Detection | 3 weeks | 1d ago | 3w | 51 | 0.02 |
| Timelines Wiki | 6 weeks | 1d ago | 6w | 78 | 0.02 |
| Voluntary Industry Commitments | 3 weeks | 1d ago | 3w | 50 | 0.02 |
| Compute Concentration | Monthly | 1d ago | 4w | 58 | 0.02 |
| AI-Induced Enfeeblement | 6 weeks | 1d ago | 6w | 77 | 0.02 |
| AI-Induced Irreversibility | 6 weeks | 1d ago | 6w | 77 | 0.02 |
| Multipolar Trap (AI Development) | 6 weeks | 1d ago | 6w | 84 | 0.02 |
| Scheming | 6 weeks | 1d ago | 6w | 71 | 0.02 |
| AI Model Steganography | 6 weeks | 1d ago | 6w | 70 | 0.02 |
| Optimistic Alignment Worldview | 6 weeks | 1d ago | 6w | 83 | 0.02 |
| The Case AGAINST AI Existential Risk | Quarterly | 1d ago | 13w | 90 | 0.01 |
| The Case FOR AI Existential Risk | Quarterly | 1d ago | 13w | 53 | 0.01 |
| Is Interpretability Sufficient for Safety? | 6 weeks | 1d ago | 6w | 50 | 0.01 |
| Should We Pause AI Development? | 6 weeks | 1d ago | 6w | 46 | 0.01 |
| Why Alignment Might Be Easy | Quarterly | 1d ago | 13w | 52 | 0.01 |
| Deep Learning Revolution (2012-2020) | Quarterly | 1d ago | 13w | 91 | 0.01 |
| Mainstream Era (2020-Present) | Quarterly | 1d ago | 13w | 47 | 0.01 |
| Genetic Enhancement / Selection | Quarterly | 1d ago | 13w | 79 | 0.01 |
| Whole Brain Emulation | 6 weeks | 1d ago | 6w | 47 | 0.01 |
| AI Risk Portfolio Analysis | Quarterly | 1d ago | 13w | 47 | 0.01 |
| Defense in Depth Model | Quarterly | 1d ago | 13w | 61 | 0.01 |
| Relative Longtermist Value Comparisons | Quarterly | 1d ago | 13w | 68 | 0.01 |
| Model Organisms of Misalignment | Quarterly | 1d ago | 13w | 73 | 0.01 |
| Power-Seeking Emergence Conditions Model | Quarterly | 1d ago | 13w | 73 | 0.01 |
| Safety-Capability Tradeoff Model | Quarterly | 1d ago | 13w | 86 | 0.01 |
| AI Scaling Laws | Quarterly | 1d ago | 13w | 93 | 0.01 |
| Centre for Effective Altruism | 6 weeks | 1d ago | 6w | 42 | 0.01 |
| Chan Zuckerberg Initiative | 6 weeks | 1d ago | 6w | 33 | 0.01 |
Showing 1–100 of 539