Longterm Wiki
Updated 2026-03-13

Structural Risks (Overview)

Overview

Structural risks are systemic harms that emerge from the interaction of AI systems with economic, political, and social structures. Unlike accident risks (where individual AI systems malfunction) or misuse risks (where humans deliberately weaponize AI), structural risks arise from the aggregate effects of AI deployment on societal organization. Many structural risks involve dynamics that are difficult to reverse once established, making early intervention particularly important.

Power Concentration

Risks from AI enabling or accelerating the concentration of power:

  • Concentration of Power: AI capabilities accumulating among a small number of actors, reducing checks and balances
  • Authoritarian Takeover: AI enabling authoritarian regimes or actors to seize and maintain power
  • Winner-Take-All Dynamics: AI markets and capabilities converging toward monopolistic or oligopolistic structures
  • Compute Concentration: The physical infrastructure of AI concentrating among a few companies, creating structural dependencies

Lock-in and Irreversibility

Risks of AI locking in values, structures, or trajectories that are difficult to change:

  • Lock-in: AI systems or AI-shaped institutions perpetuating current values or power structures indefinitely
  • Irreversibility: AI-driven changes that cannot be undone, closing off future options for course correction

Competitive and Dynamic Risks

Risks from the competitive dynamics of AI development:

  • Racing Dynamics: Competitive pressure between labs, companies, or nations leading to reduced safety investment
  • Multipolar Trap: Situations where it is individually rational for each actor to develop AI aggressively, even though collective restraint would leave all of them better off
  • Proliferation: Spread of dangerous AI capabilities to actors unable or unwilling to use them safely
  • Flash Dynamics: AI-enabled rapid cascading events that outpace human response capacity
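The multipolar trap above has the structure of a prisoner's dilemma. A minimal sketch in Python, using hypothetical payoff numbers chosen only for illustration, shows why racing can be each actor's dominant strategy even though mutual restraint yields the best joint outcome:

```python
# Two-actor "race vs. restrain" game with illustrative (hypothetical) payoffs.
# Keys are (our choice, their choice); values are (our payoff, their payoff).
PAYOFF = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: best joint outcome
    ("restrain", "race"):     (0, 4),  # restraining alone: worst individual outcome
    ("race",     "restrain"): (4, 0),  # racing alone: best individual outcome
    ("race",     "race"):     (1, 1),  # mutual racing: poor joint outcome
}

def best_response(their_choice):
    """Return the choice that maximizes our own payoff, given the other actor's choice."""
    return max(["restrain", "race"],
               key=lambda ours: PAYOFF[(ours, their_choice)][0])

# Racing is the best response to either choice by the other actor...
assert best_response("restrain") == "race"
assert best_response("race") == "race"

# ...yet mutual restraint gives a strictly higher joint payoff than mutual racing.
assert sum(PAYOFF[("restrain", "restrain")]) > sum(PAYOFF[("race", "race")])
```

With these payoffs, (race, race) is the unique equilibrium despite being jointly worse than (restrain, restrain), which is why the page treats collective-action mechanisms as central to this risk category.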

Economic and Social Disruption

Risks from AI's impact on economic and social systems:

  • Economic Disruption: Large-scale labor market disruption and economic restructuring from AI automation
  • Enfeeblement: Humans becoming increasingly dependent on AI systems, losing the capability to function without them
  • Erosion of Human Agency: Gradual loss of meaningful human choice as AI systems optimize environments and decisions
  • AI Welfare and Digital Minds: Emerging questions about moral consideration for AI systems

Infrastructure Risks

Risks from the physical and digital infrastructure of AI:

  • Concentrated Compute as a Cybersecurity Risk: The concentration of AI infrastructure creating novel attack surfaces
  • Compute Concentration: Physical infrastructure concentration creating systemic dependencies and vulnerabilities

Key Dynamics

Gradual onset: Many structural risks develop incrementally rather than through sudden events, making them harder to recognize and respond to until they are deeply entrenched.

Interaction effects: Structural risks interact with each other in reinforcing ways. Power concentration enables lock-in; racing dynamics increase proliferation; economic disruption can accelerate authoritarian takeover.

Governance-sensitive: Structural risks are particularly sensitive to governance choices. Effective AI governance can mitigate many structural risks, while governance failures can exacerbate them.

Related Pages

  • Key Debates
  • AI Governance and Policy
  • Risks
      • AI-Induced Irreversibility
      • AI Flash Dynamics
      • AI-Driven Concentration of Power
      • AI-Enabled Authoritarian Takeover
      • AI Proliferation
      • AI Value Lock-in