Longterm Wiki

Author

Eliezer Yudkowsky

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Alignment Forum

A widely cited and debated 2022 post by Eliezer Yudkowsky, representing the strongest public statement of his doom thesis; essential reading for understanding the pessimistic wing of AI safety discourse and the arguments that motivate MIRI's research priorities.

Metadata

Importance: 90/100 · blog post · primary source

Summary

Eliezer Yudkowsky's comprehensive argument for why AGI development is likely to result in human extinction, presented as a list of distinct failure modes and reasons why alignment is extremely difficult. The post systematically addresses why standard proposed solutions are insufficient and why the default outcome of unaligned AGI is catastrophic. It serves as a canonical statement of Yudkowsky's pessimistic position on humanity's ability to navigate the AGI transition safely.

Key Points

  • Lists dozens of distinct 'lethalities': reasons why AGI development leads to doom even when individual problems seem solvable, emphasizing their cumulative difficulty.
  • Argues that outer alignment, inner alignment, and interpretability are each individually insufficient and collectively still likely to fail under real AGI development conditions.
  • Contends that current ML paradigms produce systems whose internals are opaque, making verification of alignment nearly impossible before deployment at dangerous capability levels.
  • Challenges optimistic views that iterative deployment, scaling feedback, or governance can compensate for fundamental alignment uncertainty.
  • Represents Yudkowsky's explicit claim that without major breakthroughs in alignment theory, AGI timelines imply near-certain catastrophe regardless of developer intent.

Cited by 2 pages

Page | Type | Quality
Sharp Left Turn | Risk | 69.0
AI Doomer Worldview | Concept | 38.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 98 KB
[Wayback Machine capture banner · Collected by: Common Crawl · Web crawl data from Common Crawl]
The Wayback Machine - http://web.archive.org/web/20251112111245/https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

 


AI ALIGNMENT FORUM


2022 MIRI Alignment Discussion

Tags: AI Risk, Threat Models (AI), Double-Crux, Fuzzies, Language Models (LLMs), Meetups & Local Communities (topic), AI
Curated


AGI Ruin: A List of Lethalities

by Eliezer Yudkowsky

5th Jun 2022

36 min read

711 karma · 147 comments


Previous:

Six Dimensions of Operational Adequacy in AGI Projects

33 comments · 317 karma

Next:

A central AI alignment problem: capabilities generalization, and the sharp left turn

18 comments · 276 karma


Comments

[Collapsed comment list; this capture preserves only scores and usernames, not comment text. Participants include evhub, Eliezer Yudkowsky, Vaniver, Rob Bensinger, Vanessa Kosoy, Wei Dai, Steven Byrnes, CarlShulman, John Schulman, TurnTrout, Daniel Kokotajlo, johnswentworth, Richard_Ngo, and others.]

... (truncated, 98 KB total)
Resource ID: 0aea2d39b8284ab1 | Stable ID: sid_WAssQPQdLi