
Risk Trajectory Experiments


These experiments visualize AI Transition Model risk trajectories, showing how catastrophe risk and lock-in severity evolve from the present day through TAI and beyond.


[Figure: Risk trajectories from present to 2060. Two stacked area panels: Cumulative Catastrophe Risk (0-50%) and Lock-in Severity Index (0-100), each spanning 2025-2060 with a TAI marker. Legends: catastrophe pathways (AI Takeover Rapid, AI Takeover Gradual, Human Catastrophe State, Human Catastrophe Rogue) and lock-in types (Economic, Political, Epistemic, Values, Suffering).]

The core idea: risk compounds over time, with different pathways contributing differently at different phases.
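
As a minimal illustration of that compounding, the sketch below rolls per-period catastrophe hazards into a cumulative curve under an independence assumption; the function and variable names are hypothetical, and the AI Transition Model's actual aggregation method is not specified on this page.

```typescript
// Illustrative only: one way to compound per-period catastrophe hazards
// into a cumulative risk curve, assuming independence across periods.
// The AI Transition Model's actual aggregation may differ.

/** Probability of catastrophe occurring within a single period (0..1). */
type PeriodHazard = number;

/** Cumulative probability that catastrophe has occurred by the end of each period. */
function cumulativeRisk(periodHazards: PeriodHazard[]): number[] {
  let survival = 1; // probability that no catastrophe has happened yet
  return periodHazards.map((h) => {
    survival *= 1 - h;
    return 1 - survival;
  });
}

// Hypothetical hazards for illustration only (not values from the model):
console.log(cumulativeRisk([0.01, 0.02, 0.04])); // → [0.01, 0.0298, 0.068608]
```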

The AI Transition Model identifies (see the type sketch after these lists):

Catastrophe Pathways:

  1. AI Takeover (Rapid) - Fast recursive self-improvement leading to misaligned superintelligence
  2. AI Takeover (Gradual) - Slow erosion of human control
  3. Human Catastrophe (State Actor) - Great power conflict enabled by AI
  4. Human Catastrophe (Rogue Actor) - Non-state actors using AI for mass harm

Lock-in Types:

  1. Economic - Irreversible wealth/power concentration
  2. Political - Authoritarian control solidified by AI
  3. Epistemic - Information environment permanently degraded
  4. Values - Human values shaped/locked by AI systems
  5. Suffering - Persistent negative states (e.g., digital minds)

Root Factors (driving both outcomes):

  • Misalignment Potential
  • Misuse Potential
  • AI Capabilities
  • AI Ownership Concentration
  • Civilizational Competence
  • Transition Turbulence
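
A minimal sketch of this taxonomy as data types, assuming a TypeScript representation; the identifiers themselves are illustrative, while the category names come from the lists above.

```typescript
// Sketch of the taxonomy above as TypeScript union types.
// The type names are assumed for illustration; only the category labels
// come from the AI Transition Model lists in this article.

type CatastrophePathway =
  | 'AI Takeover (Rapid)'
  | 'AI Takeover (Gradual)'
  | 'Human Catastrophe (State Actor)'
  | 'Human Catastrophe (Rogue Actor)';

type LockInType = 'Economic' | 'Political' | 'Epistemic' | 'Values' | 'Suffering';

type RootFactor =
  | 'Misalignment Potential'
  | 'Misuse Potential'
  | 'AI Capabilities'
  | 'AI Ownership Concentration'
  | 'Civilizational Competence'
  | 'Transition Turbulence';
```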

Side-by-side stacked area charts showing catastrophe risk (left) and lock-in severity (right):

[Figure: the same paired stacked area charts, Cumulative Catastrophe Risk (left) and Lock-in Severity Index (right), 2025-2060 with a TAI marker and the pathway and lock-in legends.]

A factor-attribution table shows how each root factor contributes to each outcome type:

Factor contributions over time (sparklines show each trajectory from 2025 to 2060):

[Table: rows for Misalignment Potential, Misuse Potential, AI Capabilities, AI Ownership, Civilizational Competence, and Transition Turbulence; columns for Catastrophe Risk and Lock-in Severity. Displayed cell values: 29 → 47%, 19 → 27%, 22 → 35%, 11 → 15%, 8 → 13%, 8 → 8%, 14 → 23%, 8 → 14%, 18 → 27%, 22 → 41%, 13 → 22%, 11 → 15%.]

Current levels of each root factor with trend indicators:

Current root factor levels (each affects both outcomes):

  • Misalignment Potential: 36%
  • Misuse Potential: 66%
  • AI Capabilities: 46%
  • AI Ownership: 77%
  • Civilizational Competence: 44%
  • Transition Turbulence: 71%
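
For illustration, the panel above could be backed by a simple record keyed by factor name, as in the sketch below; the variable name is assumed, and the percentages are the ones displayed here.

```typescript
// Current root factor levels as displayed above, expressed as 0–1 fractions.
// The variable name is illustrative; 'AI Ownership' is the panel's short label
// for AI Ownership Concentration.
const currentFactorLevels: Record<string, number> = {
  'Misalignment Potential': 0.36,
  'Misuse Potential': 0.66,
  'AI Capabilities': 0.46,
  'AI Ownership': 0.77,
  'Civilizational Competence': 0.44,
  'Transition Turbulence': 0.71,
};
```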

Individual pathway trajectories with confidence bands:

[Figure: Individual pathway trajectories with uncertainty bands; probability (0-50%) over 2025-2060 for AI Takeover (Rapid), AI Takeover (Gradual), Human Catastrophe (State), Human Catastrophe (Rogue), and Total Risk.]
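
A plausible data shape for these banded series, assuming each pathway carries a list of year/probability points with lower and upper bounds; the interface and field names are assumptions rather than the component's real API.

```typescript
// Assumed data shape for a pathway trajectory with a confidence band.
// Field names are illustrative, not taken from the actual implementation.
interface TrajectoryPoint {
  year: number;        // e.g., 2025..2060
  probability: number; // central estimate, 0..1
  lower: number;       // lower edge of the confidence band
  upper: number;       // upper edge of the confidence band
}

interface PathwayTrajectory {
  pathway: string;          // e.g., 'AI Takeover (Rapid)'
  points: TrajectoryPoint[];
}
```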

Combined view with all components:

[Figure: combined dashboard showing all components together: the paired stacked area charts, the factor-contribution sparkline table, and the current root factor levels panel.]

Timeline phases (see the configuration sketch after this list):

  • Current (2025-2030): Pre-TAI baseline
  • Near-TAI (2031-2036): Approaching transformative AI
  • TAI (2037-2044): Transformative AI arrival
  • Post-TAI (2045+): Stabilization or continued turbulence
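
The phase boundaries above could be captured in a small configuration object like the sketch below; the object shape is an assumption, but the labels and year ranges come from this page.

```typescript
// Timeline phases as listed above. The object shape is illustrative;
// the open-ended Post-TAI phase is modelled with a null end year.
const timelinePhases = [
  { id: 'current',  label: 'Current (Pre-TAI baseline)',                       start: 2025, end: 2030 },
  { id: 'near-tai', label: 'Near-TAI (Approaching transformative AI)',         start: 2031, end: 2036 },
  { id: 'tai',      label: 'TAI (Transformative AI arrival)',                  start: 2037, end: 2044 },
  { id: 'post-tai', label: 'Post-TAI (Stabilization or continued turbulence)', start: 2045, end: null },
] as const;
```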

Color encoding (see the illustrative palette sketch after this list):

  • Warm colors (red/orange) = Catastrophe pathways
  • Cool colors (blue/purple/pink) = Lock-in types
  • Factor-specific colors for attribution
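
The encoding rule could be expressed as a lookup from series name to color, as sketched below; the hex values are placeholders chosen to match the warm/cool split described above, since the actual palette is not documented here.

```typescript
// Illustrative palette lookup; the warm/cool split follows the encoding
// described above, but the hex values are placeholders, not the real palette.
const seriesPalette: Record<string, string> = {
  'AI Takeover (Rapid)': '#c0392b',       // warm: catastrophe pathway
  'AI Takeover (Gradual)': '#e67e22',     // warm: catastrophe pathway
  'Human Catastrophe (State)': '#d35400', // warm: catastrophe pathway
  'Human Catastrophe (Rogue)': '#e74c3c', // warm: catastrophe pathway
  'Economic Lock-in': '#2980b9',          // cool: lock-in type
  'Political Lock-in': '#8e44ad',         // cool: lock-in type
  'Epistemic Lock-in': '#3498db',         // cool: lock-in type
  'Values Lock-in': '#9b59b6',            // cool: lock-in type
  'Suffering Lock-in': '#d66ba0',         // cool/pink: lock-in type
};
```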

Key features (see the options sketch after this list):

  • TAI marker line showing expected transition point
  • Hover interactions for detailed values
  • Toggle between pathway and factor views
  • Confidence bands on trajectory lines
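
These features could be driven by a small options object, as in the assumed sketch below; the option names are illustrative, not taken from the component's real API.

```typescript
// Assumed options shape for the interactive features listed above.
// All names are illustrative.
interface RiskChartOptions {
  taiMarkerYear: number;        // vertical marker at the expected TAI transition point
  showTooltips: boolean;        // hover interactions for detailed values
  view: 'pathways' | 'factors'; // toggle between pathway and factor views
  showConfidenceBands: boolean; // uncertainty bands on trajectory lines
}

const defaultOptions: RiskChartOptions = {
  taiMarkerYear: 2037,          // illustrative; aligned with the TAI phase start listed above
  showTooltips: true,
  view: 'pathways',
  showConfidenceBands: true,
};
```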

What these visualizations convey:

  1. Risk accumulates non-linearly, with TAI as an inflection point
  2. Different pathways dominate at different times
  3. Lock-in may be the larger long-term concern even if catastrophe is avoided
  4. Root factors have differential impact on different outcomes