Longterm Wiki
Updated 2026-03-13

Key Debates

Overview

This section presents structured analyses of major debates in AI safety. For each debate, we present the strongest arguments on multiple sides, identify key cruxes, and track how expert opinion has evolved.

Major Debates

Foundational Questions

  • Is AI Existential Risk Real? - The core question of whether advanced AI poses a genuine threat to humanity's long-term survival
  • AGI Timeline Debate - When might transformative AI arrive?
  • Scaling Debate - Will scaling alone produce AGI?

Alignment Approaches

  • Why Alignment Might Be Hard - Arguments for difficulty
  • Why Alignment Might Be Easy - Arguments for tractability
  • Is Interpretability Sufficient? - Can we rely on understanding model internals alone?

Strategy & Policy

  • Pause Debate - Should AI development be paused?
  • Open vs Closed - Open-source AI tradeoffs
  • Regulation Debate - Government intervention approaches

Risk Assessment

  • The Case FOR AI Existential Risk - Strongest arguments for concern
  • The Case AGAINST AI Existential Risk - Strongest counterarguments

How Debates Are Structured

Each debate page includes:

  • Steelmanned positions - Best versions of each argument
  • Key cruxes - What would change minds
  • Evidence and citations - Supporting data for each side
  • Expert distribution - Where informed people fall
  • Implications - How resolution would affect priorities

Related Pages

  • Why Alignment Might Be Easy
  • Is Interpretability Sufficient for Safety?
  • Should We Pause AI Development?
  • Is Scaling All You Need?
  • When Will AGI Arrive?
  • Is AI Existential Risk Real?