Longterm Wiki
Updated 2026-03-13

Worldviews

Overview

People working on AI safety hold diverse worldviews that lead to different risk assessments and priorities. Understanding these worldviews helps explain disagreements and enables more productive dialogue.

Major Worldviews

Doomer

Believes AI existential risk is very high (often assigning a p(doom) above 50%):

  • Alignment is fundamentally hard
  • Current approaches are inadequate
  • We may not get many chances to get it right
  • Often associated with: MIRI, Eliezer Yudkowsky, some AI safety researchers

Long Timelines

Believes transformative AI is 20+ years away:

  • More time to solve alignment
  • Current risks are more speculative
  • Near-term concerns deserve more attention
  • Associated with: Some ML researchers, AI ethics community

Governance Focused

Prioritizes policy and institutional solutions:

  • Technical alignment is necessary but insufficient
  • Racing dynamics are the key problem
  • International coordination is critical
  • Associated with: GovAI, policy researchers, some EA organizations

Optimistic

Believes alignment is likely to succeed:

  • Current research is making real progress
  • Markets and institutions create safety incentives
  • Past technology fears were often overblown
  • Associated with: Some lab researchers, techno-optimists

How Worldviews Affect Priorities

| Worldview | Technical Research | Governance | Timelines | Key Intervention |
|---|---|---|---|---|
| Doomer | Critical but may be hopeless | Valuable | Short | Pause/slow down |
| Long Timelines | Important, time available | Moderate priority | Long | Gradual progress |
| Governance Focused | Necessary | Highest priority | Medium | Coordination |
| Optimistic | On track | Light touch | Medium | Continue current |

Using Worldview Analysis

Making worldview assumptions explicit helps:

  • Understand why experts disagree
  • Identify which cruxes would change conclusions
  • Design robustly valuable interventions
  • Have more productive conversations

Related Pages


Organizations

GovAI

Other

Eliezer Yudkowsky

Concepts

AI Doomer Worldview

Analysis

Worldview-Intervention Mapping