
Existential Catastrophe


Existential Catastrophe measures the probability and potential severity of catastrophic AI-related events. It concerns the tail risks: the scenarios we most urgently want to avoid because they could cause irreversible harm at civilizational scale.

Unlike Transition Smoothness (which concerns the journey) or Steady State Quality (which concerns the destination), Existential Catastrophe is about whether civilization avoids catastrophe at all. A world with high existential catastrophe risk might still navigate a smooth transition to a good steady state, or it might never get there.

Sub-dimensions

| Dimension | Description | Key Parameters |
| --- | --- | --- |
| Loss of Control | AI systems pursuing goals misaligned with humanity; inability to correct or shut down advanced systems | Alignment Robustness, Human Oversight Quality |
| Misuse Catastrophe | Deliberate weaponization of AI for mass harm: bioweapons, autonomous weapons, critical infrastructure attacks | Biological Threat Exposure, Cyber Threat Exposure |
| Accident at Scale | Unintended large-scale harms from deployed systems; cascading failures across interconnected AI | Safety-Capability Gap, Safety Culture Strength |
| Lock-in Risk | Irreversible commitment to bad values, goals, or power structures | AI Control Concentration, Institutional Quality |
| Concentration Catastrophe | Single actor gains decisive AI advantage and uses it harmfully | AI Control Concentration, Racing Intensity |

What Contributes to Existential Catastrophe


Primary Contributing Aggregates

| Aggregate | Relationship | Mechanism |
| --- | --- | --- |
| Misalignment Potential | ↑↑↑ Increases risk | Poorly aligned, opaque, or weakly overseen systems are more likely to cause catastrophe |
| Misuse Potential | ↑↑↑ Increases risk | Higher bio/cyber exposure, concentration, and racing all elevate existential catastrophe |
| Civilizational Competence | ↓↓ Decreases risk | Effective governance can slow racing, enforce safety standards, coordinate responses |
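
Read as a rough scoring scheme, each aggregate enters with a direction and a relative strength. The sketch below illustrates that reading only; the weight values, the linear combination, and the function name are assumptions made for illustration, not part of the model.

```python
# Illustrative only: a signed-weight reading of the aggregate table.
# Weight values and the linear form are assumptions, not part of the model.

AGGREGATE_WEIGHTS = {
    "misalignment_potential": 0.5,       # increases risk
    "misuse_potential": 0.3,             # increases risk
    "civilizational_competence": -0.2,   # decreases risk
}

def catastrophe_index(scores: dict[str, float]) -> float:
    """Combine aggregate levels in [0, 1] into a rough index in [0, 1]."""
    raw = sum(AGGREGATE_WEIGHTS[name] * scores.get(name, 0.0)
              for name in AGGREGATE_WEIGHTS)
    # Rescale the signed sum from [sum of negative weights, sum of positive
    # weights] into [0, 1] so indices are easier to compare across scenarios.
    lo = sum(w for w in AGGREGATE_WEIGHTS.values() if w < 0)
    hi = sum(w for w in AGGREGATE_WEIGHTS.values() if w > 0)
    return (raw - lo) / (hi - lo)

if __name__ == "__main__":
    world = {
        "misalignment_potential": 0.7,
        "misuse_potential": 0.4,
        "civilizational_competence": 0.6,
    }
    print(f"catastrophe index: {catastrophe_index(world):.2f}")  # ~0.55
```

A real assessment would replace the hand-picked weights with the model's own parameter estimates; the point here is only that each aggregate contributes with a sign and a relative magnitude.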

Key Individual Parameters

| Parameter | Effect on Risk | Strength |
| --- | --- | --- |
| Alignment Robustness | ↓ Reduces | ↓↓↓ Critical |
| Safety-Capability Gap | ↑ Increases | ↑↑↑ Critical |
| Racing Intensity | ↑ Increases | ↑↑↑ Strong |
| Human Oversight Quality | ↓ Reduces | ↓↓ Strong |
| Interpretability Coverage | ↓ Reduces | ↓↓ Strong |
| AI Control Concentration | ↑/↓ Depends | ↑↑ Context-dependent |
| Biological Threat Exposure | ↑ Increases | ↑↑ Direct |
| Cyber Threat Exposure | ↑ Increases | ↑↑ Direct |
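
One way to work with this table programmatically is to encode each row as a small record. The sketch below is a hypothetical encoding: the `Effect` enum and `RiskParameter` class are illustrative names, while the effect directions and strength labels are copied from the table.

```python
# Illustrative encoding of the parameter table above.
# Class and enum names are hypothetical; labels are copied from the table.
from dataclasses import dataclass
from enum import Enum


class Effect(Enum):
    REDUCES = "reduces risk"          # higher value lowers catastrophe risk
    INCREASES = "increases risk"      # higher value raises catastrophe risk
    CONTEXT_DEPENDENT = "depends"     # direction depends on other factors


@dataclass(frozen=True)
class RiskParameter:
    name: str
    effect: Effect
    strength: str


PARAMETERS = [
    RiskParameter("Alignment Robustness", Effect.REDUCES, "Critical"),
    RiskParameter("Safety-Capability Gap", Effect.INCREASES, "Critical"),
    RiskParameter("Racing Intensity", Effect.INCREASES, "Strong"),
    RiskParameter("Human Oversight Quality", Effect.REDUCES, "Strong"),
    RiskParameter("Interpretability Coverage", Effect.REDUCES, "Strong"),
    RiskParameter("AI Control Concentration", Effect.CONTEXT_DEPENDENT, "Context-dependent"),
    RiskParameter("Biological Threat Exposure", Effect.INCREASES, "Direct"),
    RiskParameter("Cyber Threat Exposure", Effect.INCREASES, "Direct"),
]

# Example query: parameters whose increase would raise catastrophe risk.
risk_raising = [p.name for p in PARAMETERS if p.effect is Effect.INCREASES]
```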

Why This Matters

Existential catastrophe is the most time-sensitive outcome dimension:

  • Irreversibility: Many catastrophic scenarios cannot be undone
  • Path dependence: High existential catastrophe risk can foreclose good steady states entirely
  • Limited recovery: Unlike transition disruption, catastrophe may preclude recovery
  • Urgency: Near-term capability advances increase near-term existential catastrophe risk

This is why much AI safety work focuses on existential catastrophe reduction—it's the outcome where failure is most permanent.

Related Outcomes

Causal Relationships

Auto-generated from the master graph. Shows key relationships.
