Agent Foundations
Agent foundations research (MIRI's mathematical frameworks for aligned agency) faces low tractability: after 10+ years its core problems remain unsolved, prompting MIRI's 2024 strategic pivot away from the field. The assessment gives a ~15-25% probability that the work is essential, with 60-75% confidence in its low tractability.
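As a rough sanity check on how these two estimates combine, the sketch below multiplies the stated ranges to bound the probability that agent foundations work is both essential and tractable. The independence assumption and the interval arithmetic are illustrative simplifications added here, not part of the original assessment.

```python
# Rough interval arithmetic on the stated estimates (illustrative only;
# assumes the two judgments are independent, which the assessment does not claim).

p_essential = (0.15, 0.25)  # ~15-25% probability the work is essential
p_low_tract = (0.60, 0.75)  # 60-75% confidence in low tractability

# Complement: 25-40% probability the work is tractable after all.
p_tractable = (1 - p_low_tract[1], 1 - p_low_tract[0])

# Probability the work is both essential and tractable, under independence.
lo = p_essential[0] * p_tractable[0]  # 0.15 * 0.25 = 0.0375
hi = p_essential[1] * p_tractable[1]  # 0.25 * 0.40 = 0.10

print(f"essential AND tractable: {lo:.1%} - {hi:.1%}")  # ~3.8% - 10.0%
```

On these numbers, the chance the field is both necessary and workable lands in the single digits, which is consistent with the pivot described above.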
Related Pages
Long-Timelines Technical Worldview
The long-timelines worldview (20-40+ years to AGI) argues for foundational research over rushed solutions based on historical AI overoptimism, curr...
Machine Intelligence Research Institute (MIRI)
A pioneering AI safety research organization that shifted from technical alignment research to policy advocacy, founded by Eliezer Yudkowsky in 200...
AI Doomer Worldview
Short timelines, hard alignment, high risk.
Optimistic Alignment Worldview
The optimistic alignment worldview holds that AI safety is solvable through engineering and iteration.
AI Alignment Research Agendas
Analysis of major AI safety research agendas comparing approaches from Anthropic ($100M+ annual safety budget, 37-39% team growth), DeepMind (30-...