Long-term Trajectory
Long-term Trajectory measures the expected quality of the world after the existential catastrophe period resolves—whatever equilibrium or trajectory humanity ends up on. This is about the destination (or ongoing trajectory), distinct from whether we survive to reach it (Existential Catastrophe).
Even if we avoid catastrophe entirely, we could end up in a world where humans lack meaningful agency, AI benefits are concentrated among few, or authentic human preferences are manipulated. A "successful" transition to a dystopia is still a failure.
Why "Long-term Trajectory" not "Steady State"? We don't know whether a stable equilibrium will emerge. The future might involve ongoing change, multiple equilibria, or no clear "steady state" at all. "Long-term Trajectory" captures what we care about without assuming stability.
Sub-dimensions
| Dimension | Description | Key Parameters |
|---|---|---|
| Human Agency Preserved | People retain meaningful autonomy and genuine choice | Human Agency, Preference Authenticity |
| Benefit Distribution | AI gains are shared equitably, not concentrated | AI Control Concentration, Economic Stability |
| Democratic Governance | Legitimate collective decision-making maintained | Institutional Quality, AI Control Concentration |
| Human Purpose/Meaning | People have fulfilling roles, not idle consumption | Human Expertise, Human Agency |
| Epistemic Autonomy | Humans can think independently and form genuine views | Epistemic Health, Reality Coherence |
| Diversity Preserved | Multiple viable ways of life exist | Preference Authenticity, Human Agency |
| Option Value | Future generations can make different choices | Reversibility, Lock-in avoidance |
What Shapes Long-term Trajectory
Scenario Impact Scores
Ultimate Scenarios That Affect This
| Ultimate Scenario | Effect on Long-term Trajectory |
|---|---|
| Long-term Lock-in | Primary — Determines whether good or bad values/power structures persist |
| AI Takeover | Secondary — Successful takeover means AI goals, not human values |
The Root Factor Transition Turbulence also affects Long-term Trajectory through path dependence: the course the transition takes constrains which long-run outcomes remain reachable.
Key Parameters
| Parameter | Relationship | Mechanism |
|---|---|---|
| Epistemics | High → Better | Clear thinking and shared reality enable good choices |
| Governance | High → Better | Effective institutions shape beneficial structures |
| Adaptability | High → Better | Preserved human capacity maintains agency and purpose |
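The parameter relationships above are directional ("High → Better") rather than quantitative. As a purely illustrative sketch, they could be combined into a rough trajectory score; the equal weights and linear form below are hypothetical assumptions for clarity, not part of the model.

```python
# Illustrative only: the model specifies directions ("High -> Better"),
# not a formula. Equal weights and linearity are assumptions.

def trajectory_score(epistemics: float, governance: float, adaptability: float) -> float:
    """Combine the three parameters (each in [0, 1]) into a rough
    Long-term Trajectory score in [0, 1]. All three relate positively
    to the outcome, per the Key Parameters table."""
    weights = {"epistemics": 1 / 3, "governance": 1 / 3, "adaptability": 1 / 3}
    return (weights["epistemics"] * epistemics
            + weights["governance"] * governance
            + weights["adaptability"] * adaptability)

# Stronger parameter values yield a higher score.
strong = trajectory_score(0.9, 0.8, 0.7)
weak = trajectory_score(0.2, 0.3, 0.1)
```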
Why This Matters
Long-run conditions are what persist:
- Lock-in effects: Once established, structures are hard to change
- Compounding: Small differences in trajectory compound over time
- Irreversibility: Some futures preclude alternatives permanently
- Values matter: Technical success (avoiding catastrophe) isn't enough if we lose what we value
This outcome dimension asks: "Even if we avoid disaster, will the future be worth living in?"
Key Trade-offs
| Trade-off | Description |
|---|---|
| Safety vs. Agency | Maximum safety might require ceding control to AI, reducing human agency |
| Efficiency vs. Purpose | Optimal AI allocation might leave humans without meaningful roles |
| Coordination vs. Diversity | Global coordination might homogenize cultures and ways of life |
| Speed vs. Deliberation | Faster development might lock in values before we understand implications |
| Stability vs. Option Value | Stable good outcomes might preclude even better alternatives |
Scenarios
| Scenario | Long-term Trajectory | Characteristics |
|---|---|---|
| Flourishing | Very High | Human agency preserved, benefits shared, meaning maintained |
| Comfortable Dystopia | Low | Material abundance but no agency, meaning, or authentic choice |
| Stagnation | Medium | Safety achieved but progress halted, options foreclosed |
| Fragmented | Variable | Some regions flourish, others don't; high inequality |
| Gradual Decline | Declining | No catastrophe but slow erosion of human relevance |
Relationship to Existential Catastrophe
| Existential Catastrophe Outcome | Long-term Trajectory |
|---|---|
| Catastrophe occurs | N/A (no long run) |
| Catastrophe avoided, bad lock-in | Low |
| Catastrophe avoided, good trajectory | High |
Key insight: Existential Catastrophe and Long-term Trajectory are partially independent. You can:
- Avoid catastrophe but end up in a bad future (dystopia)
- Face high existential catastrophe risk but good outcomes conditional on survival (high-variance)
- Achieve both low risk and high value (best case)
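This partial independence can be made concrete with a simple two-stage expected-value decomposition: catastrophe contributes zero value, and trajectory quality only matters conditional on survival. The sketch below is illustrative; the probabilities and values are hypothetical numbers, not estimates from the model.

```python
# Hedged illustration of why catastrophe risk and conditional trajectory
# quality are partially independent. All numbers are hypothetical.

def expected_value(p_catastrophe: float, trajectory_value: float) -> float:
    """E[value] = P(no catastrophe) * E[value | no catastrophe].
    A catastrophe outcome contributes zero value."""
    return (1 - p_catastrophe) * trajectory_value

# A safe path to a dystopia can score worse than a riskier path
# to a flourishing trajectory:
safe_dystopia = expected_value(p_catastrophe=0.05, trajectory_value=0.2)
risky_flourishing = expected_value(p_catastrophe=0.40, trajectory_value=0.9)
```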
Related Content
- Existential Catastrophe — The other Ultimate Outcome
- Long-term Lock-in — Key Ultimate Scenario for long-term trajectory
Causal Relationships
Auto-generated from the master graph. Shows key relationships.
What links here
- AI Ownership (factor) → drives
- Civilizational Competence (factor) → drives
- Long-term Lock-in (scenario) → contributes-to