Is Power-Seeking AI an Existential Risk?
Author: Joe Carlsmith
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Abstract
This report examines what I see as the core argument for concern about existential risk from misaligned artificial intelligence. I proceed in two stages. First, I lay out a backdrop picture that informs such concern. On this picture, intelligent agency is an extremely powerful force, and creating agents much more intelligent than us is playing with fire -- especially given that if their objectives are problematic, such agents would plausibly have instrumental incentives to seek power over humans. Second, I formulate and evaluate a more specific six-premise argument that creating agents of this kind will lead to existential catastrophe by 2070.
Cited by 5 pages
| Page | Type | Quality (0–100) |
|---|---|---|
| AI Acceleration Tradeoff Model | Analysis | 50.0 |
| Carlsmith's Six-Premise Argument | Analysis | 65.0 |
| Instrumental Convergence | Risk | 64.0 |
| Power-Seeking AI | Risk | 67.0 |
| AI Doomer Worldview | Concept | 38.0 |