Longterm Wiki

Pause / Moratorium


Proposals to pause or slow frontier AI development until safety is better understood. If implemented, such measures could offer substantial safety benefits, but they face significant coordination challenges and have not been adopted by any major AI laboratory.

Related Pages

Organizations

Anthropic
OpenAI
xAI

Risks

Multipolar Trap (AI Development)
AI Trust Cascade Failure

Approaches

Multi-Agent Safety
Corporate AI Safety Responses

Analysis

Multipolar Trap Dynamics Model
Anthropic Impact Assessment Model
Racing Dynamics Game Theory Model
AI Lab Incentives Model

Other

Elon Musk

Concepts

Alignment Policy Overview

Key Debates

AI Safety Solution Cruxes
AI Structural Risk Cruxes
Should We Pause AI Development?

Policy

International Compute Regimes

Quick Facts

Status: Proposed; no implementation
Scope: International

Tags

moratorium, development-pause, coordination, precautionary-principle, racing-dynamics