darioamodei.com
Cited by 1 page: Anthropic-Pentagon Standoff (2026) (Event, quality 70.0)

Cached Content Preview

HTTP 200 · Fetched Feb 26, 2026 · 98 KB
# Machines of Loving Grace[1](https://darioamodei.com/essay/machines-of-loving-grace#fn:1)

[1] [https://allpoetry.com/All-Watched-Over-By-Machines-Of-Loving-Grace](https://allpoetry.com/All-Watched-Over-By-Machines-Of-Loving-Grace)

How AI Could Transform the World for the Better

October 2024

I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. **I think that most people are underestimating just how radical the upside of AI could be**, just as I think most people are underestimating how bad the risks could be.

In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes _right_. Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one.

First, however, I wanted to briefly explain why I and Anthropic haven’t talked that much about powerful AI’s upsides, and why we’ll probably continue, overall, to talk a lot about risks. In particular, I’ve made this choice out of a desire to:

- **Maximize leverage**. The basic development of AI technology and many (not all) of its benefits seem inevitable (unless the risks derail everything) and are fundamentally driven by powerful market forces. On the other hand, the risks are not predetermined and our actions can greatly change their likelihood.
- **Avoid perception of propaganda**. AI companies talking about all the amazing benefits of AI can come off like propagandists, or as if they’re attempting to distract from downsides. I also think that as a matter of principle it’s bad for your soul to spend too much of your time “talking your book”.
- **Avoid grandiosity**. I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation. I think it’s dangerous to view companies as unilaterally shaping the world, and dangerous to view practical technological goals in essentially religious terms.
- **Avoid “sci-fi” baggage**. Although I think most people underestimate the upside of powerful AI

... (truncated, 98 KB total)