Pausing AI Developments Isn't Enough. We Need to Shut it All Down
Credibility Rating
3/5 (Good). Good quality: reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: TIME
Data Status
Not fetched
Cited by 4 pages
| Page | Type | Quality |
|---|---|---|
| Should We Pause AI Development? | Crux | 47.0 |
| Machine Intelligence Research Institute | Organization | 50.0 |
| Eliezer Yudkowsky: Track Record | -- | 61.0 |
| AI Doomer Worldview | Concept | 38.0 |
Cached Content Preview
HTTP 200 | Fetched Feb 23, 2026 | 14 KB
# Pausing AI Developments Isn’t Enough. We Need to Shut it All Down
Mar 29, 2023 6:01 PM ET

Illustration for TIME by Lon Tweeten
By [Eliezer Yudkowsky](https://time.com/author/eliezer-yudkowsky/)
An [open letter](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin.
I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.
**Read More:** [_AI Labs Urged to Pump the Brakes in Open Letter_](https://time.com/6266679/musk-ai-open-letter/)
The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
Many researchers steeped in these [issues](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities), including myself, [expect](https://www.lesswrong.com/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results) that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that _could in principle_ be imbued into an AI but _we are not ready_ and _do not currently know how._
Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
The likely result of humanity facing down an opposed superhuman intelligence is a total loss. Valid metaphors include “a 10-year-old trying to pla
... (truncated, 14 KB total)
Resource ID: d0c81bbfe41efe44 | Stable ID: OWI2ZmNjMG