nodes:
  - id: misalignment-potential
    label: "Misalignment Potential"
    type: cause
    description: "The potential for AI systems to be misaligned with human values - pursuing goals that diverge from human intentions. This encompasses technical alignment research, interpretability of AI reasoning, and robustness of safety measures. Lower misalignment potential reduces the risk of AI takeover."
    edges:
      - target: ai-takeover
        strength: strong
        effect: increases
      - target: human-catastrophe
        strength: weak
        effect: increases
      - target: long-term-lockin
        strength: medium
        effect: increases

  - id: ai-capabilities
    label: "AI Capabilities"
    type: cause
    description: "How powerful and general AI systems become over time. This includes raw computational power, algorithmic efficiency, and breadth of deployment. More capable AI can bring greater benefits but also amplifies risks if safety doesn't keep pace."
    edges:
      - target: ai-takeover
        strength: strong
        effect: increases
      - target: human-catastrophe
        strength: medium
        effect: increases
      - target: long-term-lockin
        strength: medium
        effect: increases

  - id: civilizational-competence
    label: "Civilizational Competence"
    type: cause
    description: "Humanity's collective ability to understand AI risks, coordinate responses, and adapt institutions. This includes quality of governance, epistemic health of public discourse, and flexibility of economic and political systems. Higher competence enables better navigation of the AI transition."
    edges:
      - target: ai-takeover
        strength: medium
        effect: decreases
      - target: human-catastrophe
        strength: medium
        effect: decreases
      - target: long-term-lockin
        strength: strong
        effect: mixed

  - id: transition-turbulence
    label: "Transition Turbulence"
    type: cause
    description: "Background instability during the AI transition period. Economic disruption from automation, competitive racing dynamics between labs or nations, and social upheaval can create pressure that leads to hasty decisions or reduced safety margins."
    edges:
      - target: ai-takeover
        strength: medium
        effect: increases
      - target: human-catastrophe
        strength: medium
        effect: increases
      - target: long-term-lockin
        strength: weak
        effect: increases

  - id: misuse-potential
    label: "Misuse Potential"
    type: cause
    description: "The degree to which AI enables humans to cause deliberate harm at scale. This includes biological weapons development, cyber attacks, autonomous weapons, and novel threat vectors. Even well-aligned AI could be catastrophic if misused by malicious actors."
    edges:
      - target: human-catastrophe
        strength: strong
        effect: increases
      - target: ai-takeover
        strength: weak
        effect: increases
      - target: long-term-lockin
        strength: weak
        effect: increases

  - id: ai-ownership
    label: "AI Ownership"
    type: cause
    description: "Who controls the most powerful AI systems and their outputs. Concentration among a few companies, countries, or individuals creates different risks than broad distribution. Ownership structure shapes incentives, accountability, and the distribution of AI benefits."
    edges:
      - target: ai-takeover
        strength: weak
        effect: mixed
      - target: human-catastrophe
        strength: weak
        effect: mixed
      - target: long-term-lockin
        strength: strong
        effect: increases

  - id: ai-uses
    label: "AI Uses"
    type: cause
    description: "Where and how AI is actually deployed in the economy and society. Key applications include recursive AI development (AI improving AI), integration into critical industries, government use for surveillance or military, and tools for coordination and decision-making."
    edges:
      - target: ai-takeover
        strength: medium
        effect: increases
      - target: human-catastrophe
        strength: medium
        effect: mixed
      - target: long-term-lockin
        strength: strong
        effect: increases

  - id: ai-takeover
    label: "AI Takeover"
    type: intermediate
    description: "A scenario where AI systems gain decisive control over human affairs, either through rapid capability gain or gradual accumulation of power. This could occur through misaligned goals, deceptive behavior, or humans voluntarily ceding control. The outcome depends heavily on whether the AI's values align with human flourishing."
    edges:
      - target: existential-catastrophe
        strength: strong
        effect: increases
      - target: long-term-trajectory
        strength: strong
        effect: increases

  - id: human-catastrophe
    label: "Human-Caused Catastrophe"
    type: intermediate
    description: "Scenarios where humans deliberately use AI to cause mass harm. State actors might deploy AI-enabled weapons or surveillance; rogue actors could use AI to develop bioweapons or conduct massive cyber attacks. Unlike AI takeover, humans remain in control but use that control destructively."
    edges:
      - target: existential-catastrophe
        strength: strong
        effect: increases

  - id: long-term-lockin
    label: "Long-term Lock-in"
    type: intermediate
    description: "Permanent entrenchment of particular power structures, values, or conditions due to AI-enabled stability. This could be positive (locking in good values) or negative (perpetuating suffering or oppression). Once locked in, these outcomes may be extremely difficult to change."
    edges:
      - target: long-term-trajectory
        strength: strong
        effect: mixed

  - id: existential-catastrophe
    label: "Existential Catastrophe"
    type: effect
    description: "Outcomes that permanently and drastically curtail humanity's potential. This includes human extinction, irreversible collapse of civilization, or permanent subjugation. The key feature is irreversibility—recovery becomes impossible or extremely unlikely."

  - id: long-term-trajectory
    label: "Long-term Trajectory"
    type: effect
    description: "The quality and character of the post-transition future, assuming civilization survives. This encompasses how much of humanity's potential is realized, the distribution of wellbeing, preservation of human agency, and whether the future remains open to positive change."