Future Projections

This section explores different scenarios for how AI development might unfold. Scenario analysis helps prepare for multiple futures rather than betting on a single prediction. Note that the scenarios below are not mutually exclusive, so their probability ranges need not sum to 100%; a short sketch after the scenarios makes this concrete.

AI development succeeds with safety:

  • Alignment research proves sufficient
  • Safety measures are implemented effectively
  • Humanity retains meaningful control
  • Probability estimate: 15-35%

Misaligned AI causes major harm:

  • Alignment fails despite effort
  • Deceptive or power-seeking AI emerges
  • Severity ranges from a major setback to existential catastrophe
  • Probability estimate: 5-25%

Development is deliberately slowed:

  • Government intervention or lab coordination
  • More time bought for safety research
  • Possible through regulation or crisis response
  • Probability estimate: 10-25%

Gradual change without clear transition:

  • No single “AGI moment”
  • Ongoing adaptation to incremental progress
  • Neither utopia nor catastrophe
  • Probability estimate: 30-50%

Multiple powerful AI systems compete:

  • No winner-take-all outcome
  • International competition dynamics
  • Complex coordination challenges
  • Probability estimate: 25-45%
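
To make these estimates concrete, here is a minimal sketch in Python that encodes the five scenarios with the probability ranges given above. The `Scenario` dataclass and the summation check are illustrative additions, not part of the original analysis:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    p_low: float   # lower bound of the probability estimate
    p_high: float  # upper bound of the probability estimate

# Probability ranges exactly as given in this section.
SCENARIOS = [
    Scenario("AI development succeeds with safety",     0.15, 0.35),
    Scenario("Misaligned AI causes major harm",         0.05, 0.25),
    Scenario("Development is deliberately slowed",      0.10, 0.25),
    Scenario("Gradual change without clear transition", 0.30, 0.50),
    Scenario("Multiple powerful AI systems compete",    0.25, 0.45),
]

# The scenarios overlap (e.g. multipolar competition can coexist with
# gradual change), so the bounds need not sum to 1.
print(f"sum of lower bounds: {sum(s.p_low for s in SCENARIOS):.2f}")   # 0.85
print(f"sum of upper bounds: {sum(s.p_high for s in SCENARIOS):.2f}")  # 1.80
```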

Scenario analysis helps us:

  • Test robustness - Which interventions help across scenarios? (see the sketch after this list)
  • Identify early signs - What would indicate we’re heading toward each?
  • Prepare contingencies - What should we do if each materializes?
  • Communicate - Make abstract risks concrete
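
As one way to operationalize the robustness test, the sketch below scores a few interventions against each of the five scenarios and picks the one with the best worst-case score (a simple maximin rule). The interventions and all payoff numbers are invented placeholders, not claims from this section:

```python
# Hypothetical payoff of each intervention under each scenario
# (columns follow the order of the five scenarios above).
# All values are invented for illustration.
PAYOFFS = {
    "alignment research": [0.9, 0.8, 0.7, 0.5, 0.6],
    "compute governance": [0.3, 0.7, 0.9, 0.4, 0.7],
    "capability racing":  [0.6, 0.1, 0.1, 0.5, 0.4],
}

def most_robust(payoffs: dict[str, list[float]]) -> str:
    """Return the intervention whose worst-case score
    across scenarios is highest (maximin)."""
    return max(payoffs, key=lambda name: min(payoffs[name]))

print(most_robust(PAYOFFS))  # -> alignment research (worst case: 0.5)
```

Maximin is only one possible decision rule; weighting payoffs by the probability ranges above would reward expected value rather than worst-case safety.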

The scenario we end up in depends heavily on the following factors (a toy sensitivity sketch follows this list):

  • AGI timelines (sooner favors speed, later favors preparation)
  • Alignment difficulty (harder favors pessimistic scenarios)
  • Coordination success (better coordination enables pause/redirect)
  • First-mover dynamics (monopoly vs. multipolar outcomes)
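
To illustrate the direction of these dependencies, here is a toy sensitivity sketch: it nudges the midpoint of each scenario’s probability range by a multiplier under the condition “alignment turns out to be hard”. The multipliers are invented placeholders chosen only to match the directional claims above:

```python
# Midpoints of the probability ranges given earlier.
MIDPOINTS = {
    "succeeds with safety": 0.250,
    "misaligned harm":      0.150,
    "deliberate slowdown":  0.175,
    "gradual change":       0.400,
    "multipolar":           0.350,
}

# Invented multipliers for "alignment turns out to be hard":
# harder alignment favors the pessimistic scenarios.
HARD_ALIGNMENT = {
    "succeeds with safety": 0.6,
    "misaligned harm":      1.8,
    "deliberate slowdown":  1.3,  # crisis response becomes likelier
    "gradual change":       0.9,
    "multipolar":           1.0,
}

for name, mid in MIDPOINTS.items():
    adjusted = min(1.0, mid * HARD_ALIGNMENT[name])
    print(f"{name}: {mid:.2f} -> {adjusted:.2f}")
```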