Longterm Wiki

MAIM (Mutually Assured AI Malfunction)

A deterrence framework proposed by Dan Hendrycks, Eric Schmidt, and Alexandr Wang in their 2025 paper 'Superintelligence Strategy'. MAIM posits that rival states will naturally deter each other from pursuing unilateral AI dominance, because destabilizing AI projects can be sabotaged through an escalation ladder ranging from espionage to kinetic strikes. MAIM is one pillar of the paper's three-pillar framework, alongside nonproliferation and competitiveness.

Top Related Pages

Organizations

Center for AI Safety
Machine Intelligence Research Institute
Safe Superintelligence Inc.

Risks

Cyberweapons Risk
Compute Concentration
Treacherous Turn
Multipolar Trap (AI Development)

Approaches

Pause Advocacy

Analysis

Authoritarian Tools Diffusion Model
AI Safety Multi-Actor Strategic Landscape

Other

Dan Hendrycks
Nick Bostrom
Ilya Sutskever

Policy

Hardware-Enabled Governance
China AI Regulatory Framework

Historical

Anthropic-Pentagon Standoff (2026)
The MIRI Era

Concepts

Superintelligence
Self-Improvement and Recursive Enhancement

Tags

ai-deterrence, international-governance, geopolitics, superintelligence, national-security