Why Do AI Researchers Rate the Probability of Doom So Low?
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: LessWrong
A 2022 LessWrong discussion post useful for illustrating the debate over P(Doom) estimates and the epistemic gap between AI safety advocates and mainstream AI researchers, but limited in analytical depth.
Forum Post Details
Summary
A LessWrong question post exploring the disconnect between mainstream AI researchers' low P(Doom) estimates (5-10%) and the much higher estimates held by AI safety advocates like Eliezer Yudkowsky. The author shares their own reasoning for high doom probability based on the orthogonality thesis, the ease of building misaligned AI, and the political infeasibility of a global AI ban, while seeking to understand what mainstream researchers know that leads to lower estimates.
Key Points
- Mainstream AI researchers estimate ~5-10% probability of AI-caused human extinction, while safety-focused researchers like Yudkowsky estimate >50%.
- Author's case for high P(Doom) rests on the orthogonality thesis, misaligned AI being easier to build than aligned AI, and the political infeasibility of a global AI ban.
- Author initially estimated 80% P(Doom), later revised to 20-40% after distinguishing 'human extinction' from 'bad outcome I dislike'.
- The post highlights the psychological difficulty of committing to explicit probability estimates for catastrophic outcomes.
- The core question, namely what mainstream AI researchers know that lowers their risk estimates, remains largely unanswered in the post itself.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Why Alignment Might Be Hard | Argument | 69.0 |
Cached Content Preview
# Why Do AI Researchers Rate the Probability of Doom So Low?
By Aorou
Published: 2022-09-24
I recently read [What do ML researchers think about AI in 2022](https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml-researchers-think-about-ai-in-2022).
There, the probability of Doom is sub-10%. Which is *high*, but as I understand it, in the minds of people like Eliezer Yudkowsky, we're *more likely* doomed than not.
I personally lean towards Yudkowsky's views, because:
- I don't believe human/evolution-selected minds have thinking power that a machine could not have
- I believe in the Orthogonality Thesis
(I think those two claims can be defended empirically)
- I think it is *easier* to make a non-aligned machine than an aligned one
(I believe that current research strongly hints that this is true)
- I believe that *more people* are working on non-aligned AI than on aligned AI
- I think it would be politically very hard to stop all AI research, i.e. to implement and enforce a worldwide ban on AI R&D
Given all this (and probably other observations that I made), I think we're doomed.
I feel my heart beating **hard** when I think to myself that I have to give a number.
I imagine I'm bad at it, that it'll be wrong, and that it's more uncomfortable/inconvenient than just saying "we're fucked" without any number, but here goes anyway-
I'd say that we're
(my brain KEEPS on flinching away from coming up with a number, I don't WANT to *actually* follow through on all my thoughts and observations about the state of AI and what it means for the Future)-
(I think of all the possible Deus-Ex-Machina that could happen)-
(I imagine how terrible it is if I'm WRONG)-
(Visualizing my probabilities for the AI-doom scenario in hypothetical worlds where I don't live makes it easier, I think)
~~My probability of doom from AI is around 80% in the next 50 years.~~
~~(And my probability of Doom *if AI keeps getting better* is 95% (one reason it might *not* get better, I imagine, is that *another* X-Risk happens before AI)).~~
~~I would be surprised if more than 1 world, out of 5 in our current situation, made it out alive from developing AI.~~
*Edit, a week after the post:*
*I'd say my P(Doom) in the next 50 years is now between 20-40%.*
*It's not that I suddenly think AI doom is less likely, but I think I put my P(Doom) at 80% before because I lumped all of my fears together, as if P(Doom) = P(Outcome I really don't like).*
*But those two things are different.*
*For me, P(Doom) = P(humanity is wiped out). This is* different *from a bad outcome like [A few people own all of the AI and everybody else has a terrible life with 0 chance of overthrowing the system].*
*To be clear, that situation is terrible and I don't want to live there, but it's not **doom**.*
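A minimal way to formalize that distinction (a sketch; the numeric split in the comments is an illustrative assumption, not a figure from the post): extinction is a strict subset of the outcomes the author fears, so lumping the two events together inflates the estimate.

```latex
% Doom = human extinction; Bad = any outcome the author "really doesn't like".
% Doom is a subset of Bad, so the two probabilities are not interchangeable:
\[
  P(\mathrm{Bad}) \;=\; P(\mathrm{Doom}) \;+\; P(\mathrm{Bad} \setminus \mathrm{Doom})
  \;\ge\; P(\mathrm{Doom})
\]
% Illustrative reading of the author's revision (the split is assumed):
% if P(Bad) ~ 0.8 and P(Bad \ Doom) ~ 0.4-0.6, then P(Doom) ~ 0.2-0.4.
```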
So, my question:
### **What do AI researchers know, or think they know, that lowers their risk estimates?**
... (truncated, 6 KB total)