EA Forum surveys
Web
Author
bmg
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: EA Forum
Data Status
Not fetched
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Why Alignment Might Be Hard | Argument | 69.0 |
| Eliezer Yudkowsky: Track Record | -- | 61.0 |
Cached Content Preview
HTTP 200 | Fetched Feb 23, 2026 | 98 KB
# [On Deference and Yudkowsky's AI Risk Estimates](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates)
by [bmg](https://forum.effectivealtruism.org/users/bmg?from=post_header)
Jun 19 2022 · 20 min read
[AI safety](https://forum.effectivealtruism.org/topics/ai-safety)[Building effective altruism](https://forum.effectivealtruism.org/topics/building-effective-altruism)[Forecasting](https://forum.effectivealtruism.org/topics/forecasting)[Epistemic deference](https://forum.effectivealtruism.org/topics/epistemic-deference)[AI alignment](https://forum.effectivealtruism.org/topics/ai-alignment)[AI forecasting](https://forum.effectivealtruism.org/topics/ai-forecasting)[Eliezer Yudkowsky](https://forum.effectivealtruism.org/topics/eliezer-yudkowsky)[Risk assessment](https://forum.effectivealtruism.org/topics/risk-assessment)[Criticism and Red Teaming Contest](https://forum.effectivealtruism.org/topics/criticism-and-red-teaming-contest)[Criticism of work in effective altruism](https://forum.effectivealtruism.org/topics/criticism-of-work-in-effective-altruism) [Frontpage](https://forum.effectivealtruism.org/about#Finding_content)
[On Deference and Yudkowsky's AI Risk Estimates](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#)
[Introduction](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#Introduction)
[Why write this post?](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#Why_write_this_post_)
[Yudkowsky’s track record: some cherry-picked examples](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#Yudkowsky_s_track_record__some_cherry_picked_examples)
[Fairly clearcut examples](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#Fairly_clearcut_examples)
[1\. Predicting near-term extinction from nanotech](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#1__Predicting_near_term_extinction_from_nanotech)
[2\. Predicting that his team had a substantial chance of building AGI before 2010](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#2__Predicting_that_his_team_had_a_substantial_chance_of_building_AGI_before_2010)
[Somewhat disputable examples](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#Somewhat_disputable_examples)
[3\. Having high confidence that AI progress would be extremely discontinuous and localized and not require much compute](https://forum.effectivealtruism.org/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates#3__Having_high_confidence_th
... (truncated, 98 KB total)

Resource ID: e1fe34e189cc4c55 | Stable ID: MTJhNzNhOG