
Worldviews

People working on AI safety hold diverse worldviews that lead to different risk assessments and priorities. Understanding these worldviews helps explain disagreements and enables more productive dialogue.

Doomer

Believes AI existential risk is very high (often >50% p(doom)):

  • Alignment is fundamentally hard
  • Current approaches are inadequate
  • We may not get many chances to get it right
  • Often associated with: MIRI, Eliezer Yudkowsky, some AI safety researchers

Long Timelines

Believes transformative AI is further away (20+ years):

  • More time to solve alignment
  • Current risks are more speculative
  • Near-term concerns deserve more attention
  • Associated with: Some ML researchers, the AI ethics community

Governance Focused

Prioritizes policy and institutional solutions:

  • Technical alignment is necessary but insufficient
  • Racing dynamics are the key problem
  • International coordination is critical
  • Associated with: GovAI, policy researchers, some EA organizations

Optimistic

Believes alignment is likely to succeed:

  • Current research is making real progress
  • Markets and institutions create safety incentives
  • Past technology fears were often overblown
  • Associated with: Some lab researchers, techno-optimists

Worldview          | Technical Research           | Governance        | Timelines | Key Intervention
-------------------|------------------------------|-------------------|-----------|------------------
Doomer             | Critical but may be hopeless | Valuable          | Short     | Pause/slow down
Long Timelines     | Important, time available    | Moderate priority | Long      | Gradual progress
Governance Focused | Necessary                    | Highest priority  | Medium    | Coordination
Optimistic         | On track                     | Light touch       | Medium    | Continue current

Explicitly identifying worldview assumptions helps:

  • Understand why experts disagree
  • Identify which cruxes would change conclusions
  • Design robustly valuable interventions
  • Have more productive conversations