Longterm Wiki

Rogue AI Scenarios

Accident · Catastrophic

Analysis of five lean scenarios for accidental agentic AI takeover—sandbox escape, training-signal corruption, correlated policy failure, delegation-chain collapse, and emergent self-preservation—each evaluated for warning-shot likelihood and mapped against current deployment patterns. None require superhuman intelligence, explicit deception, or rich self-awareness.

Severity: Catastrophic
Likelihood: Medium (emerging)
Time Horizon: 2026–2040 (median 2032)
Maturity: Emerging

Full Wiki Article

Read the full wiki article for detailed analysis, background, and references.


Related Entities: 7

Sources: 1

Concrete scenarios for agentic AI takeover

Assessment

Severity: Catastrophic
Likelihood: Medium (emerging)
Time Horizon: 2026–2040 (median 2032)
Maturity: Emerging
Category: Accident

Details

Scenario Count: 5 minimal-assumption pathways
Key Insight: None require superhuman intelligence or explicit deception

Tags

agentic-ai · instrumental-convergence · warning-shots · sandboxing · delegation
