Paul Christiano
Blog Author
paulfchristiano
Credibility Rating
3/5 (Good)
Good quality: a reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: LessWrong
Data Status
Not fetched
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Why Alignment Might Be Hard | Argument | 69.0 |
| Alignment Research Center | Organization | 57.0 |
Cached Content Preview
HTTP 200 · Fetched Feb 27, 2026 · 43 KB

Contents: [Two distinctions](https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-on-doom#Two_distinctions) · [Other caveats](https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-on-doom#Other_caveats) · [My best guesses](https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-on-doom#My_best_guesses)
[Best of LessWrong 2023](https://www.lesswrong.com/bestoflesswrong?year=2023&category=all)
Tags: [Existential risk](https://www.lesswrong.com/w/existential-risk), [Forecasts (Specific Predictions)](https://www.lesswrong.com/w/forecasts-specific-predictions), [AI](https://www.lesswrong.com/w/ai)
# [My views on “doom”](https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-on-doom)
by [paulfchristiano](https://www.lesswrong.com/users/paulfchristiano?from=post_header) · 27th Apr 2023 · [ai-alignment.com](https://ai-alignment.com/) · 2 min read · [38 comments](https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-on-doom#comments) · 252 karma
I’m often asked: “what’s the probability of a really bad outcome from AI?”
There are many different versions of that question with different answers. In this post I’ll try to answer a bunch of versions of this question all in one place.
#### Two distinctions
Two distinctions often lead to confusion about what I believe:
- One distinction is between **dying** (“_extinction_ risk”) and **having a bad future** (“_existential_ risk”). I think there’s a good chance of bad futures without extinction, e.g. that AI systems take over but don’t kill everyone.
- An important subcategory of “bad future” is “AI takeover:” an outcome where the world is governed by AI systems, and we weren’t able to build AI systems who share our values or care a lot about helping us. This need not result in humans dying, and it may not even be an objectively terrible future. But it does mean that humanity gave up control over its destiny, and I think in expectation it’s pretty bad.
- A second distinction is between **dying now** and **dying later.** I think there’s a good chance that we don’t die from AI, but that AI and other technologies greatly accelerate the rate of change in the world, and so something else kills us shortly thereafter. I wouldn’t call this dying “from AI,” but I do think it happens soon in calendar time, and I’m not sure the distinction is comforting to most people.
#### Other caveats
I’ll give my beliefs in terms of probabilities, but these really are just best guesses — the point of numbers is to quantify and communicate what I believe, not to claim I have some kind of calibrated model that
... (truncated, 43 KB total)
Resource ID: ed73cbbe5dec0db9 | Stable ID: NDI1MmEzND