Longterm Wiki

"On AGI Ruin: A List of Lethalities"

blog

Data Status

Not fetched

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Sharp Left Turn | Risk | 69.0 |

Cached Content Preview

HTTP 200 · Fetched Feb 23, 2026 · 98 KB
# [Don't Worry About the Vase](https://thezvi.substack.com/)


# On AGI Ruin: A List of Lethalities


[Zvi Mowshowitz](https://substack.com/@thezvi)

Jun 13, 2022


Response to (Eliezer Yudkowsky): [A List of Lethalities](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities).

_Author’s Note: I do not work in AI Safety, I lack technical domain knowledge, and in many ways I am going to be wrong. I was not going to write this, both to avoid wasting everyone’s time without having enough to offer and for fear of making stupid errors, but it became clear that many people thought my response would be valuable. I thank those whose anonymous sponsorship of this post both paid for my time and made me update that the post was worth writing. I would be happy for this to happen again in the future._

Eliezer has at long last delivered the definitive list of Eliezer Rants About Why AGI Will Definitely Absolutely For Sure Kill Everyone Unless Something Very Unexpected Happens.

This is excellent. In the past we had to make do with makeshift, scattershot collections of rants. Now they are all in one place, with a helpful classification system. Key claims are in bold. We can refer to, consider, and discuss them.

It would be an even better post if it were more logically organized, with dependencies pointed out and mapped and so on.

One could also propose making it not full of rants, but I _don’t_ think that would be an improvement. The rants are _important._ The rants contain _data._ They reveal Eliezer’s cognitive state and his assessment of the state of play. _Not_ ranting would leave important bits out and give a meaningfully misleading impression.

I am reminded of [this comment of mine](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/?commentId=HB3BL3Sa6M

... (truncated, 98 KB total)
Resource ID: 15bb97bb725f1f6a | Stable ID: ZWU1NDYwNT