Structural Risks (Overview)
Overview
Structural risks are systemic harms that emerge from the interaction of AI systems with economic, political, and social structures. Unlike accident risks (where individual AI systems malfunction) or misuse risks (where humans deliberately weaponize AI), structural risks arise from the aggregate effects of AI deployment on societal organization. Many structural risks involve dynamics that are difficult to reverse once established, making early intervention particularly important.
Power Concentration
Risks from AI enabling or accelerating the concentration of power:
- Concentration of Power: AI capabilities accumulating among a small number of actors, reducing checks and balances
- Authoritarian Takeover: AI enabling authoritarian regimes or actors to seize and maintain power
- Winner-Take-All Dynamics: AI markets and capabilities converging toward monopolistic or oligopolistic structures
- Compute Concentration: The physical infrastructure of AI concentrating among a few companies, creating structural dependencies
Lock-in and Irreversibility
Risks of AI locking in values, structures, or trajectories that are difficult to change:
- Lock-in: AI systems or AI-shaped institutions perpetuating current values or power structures indefinitely
- Irreversibility: AI-driven changes that cannot be undone, closing off future options for course correction
Competitive and Dynamic Risks
Risks from the competitive dynamics of AI development:
- Racing Dynamics: Competitive pressure between labs, companies, or nations leading to reduced safety investment
- Multipolar Trap: Situations where aggressive AI development is individually rational for each actor, even though collective restraint would leave everyone better off
- Proliferation: Spread of dangerous AI capabilities to actors unable or unwilling to use them safely
- Flash Dynamics: AI-enabled rapid cascading events that outpace human response capacity
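The multipolar trap above is often modeled as a prisoner's-dilemma-style game. A minimal sketch, with purely illustrative payoff numbers (not drawn from the linked pages), shows why racing dominates even though mutual restraint is collectively better:

```python
# Two-actor racing game with hypothetical payoffs.
# Each actor chooses to "race" (cut safety work) or "restrain".
# Payoff tuples are (row player, column player); higher is better.
payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: best collective outcome
    ("restrain", "race"):     (0, 4),  # the restrainer falls behind
    ("race",     "restrain"): (4, 0),
    ("race",     "race"):     (1, 1),  # mutual racing: worse for both
}

def best_response(opponent_move):
    """Return the move maximizing the row player's payoff
    against a fixed opponent move."""
    return max(["restrain", "race"],
               key=lambda m: payoffs[(m, opponent_move)][0])

# Racing is a dominant strategy: each actor prefers it
# regardless of what the other does...
assert best_response("restrain") == "race"
assert best_response("race") == "race"
# ...yet (race, race) yields payoff 1 each, strictly worse than
# the 3 each from (restrain, restrain).
```

The structure, not the specific numbers, is what matters: so long as unilateral restraint is punished and unilateral racing rewarded, individually rational actors converge on the collectively inferior outcome, which is why coordination mechanisms rather than unilateral virtue are the usual proposed remedy.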
Economic and Social Disruption
Risks from AI's impact on economic and social systems:
- Economic Disruption: Large-scale labor market disruption and economic restructuring from AI automation
- Enfeeblement: Humans becoming increasingly dependent on AI systems, losing the capability to function without them
- Erosion of Human Agency: Gradual loss of meaningful human choice as AI systems optimize environments and decisions
- AI Welfare and Digital Minds: Emerging questions about moral consideration for AI systems
Infrastructure Risks
Risks from the physical and digital infrastructure of AI:
- Concentrated Compute as a Cybersecurity Risk: The concentration of AI infrastructure creating novel attack surfaces
- Compute Concentration: Physical infrastructure concentration creating systemic dependencies and vulnerabilities
Key Dynamics
- Gradual onset: Many structural risks develop incrementally rather than through sudden events, making them harder to recognize and respond to until they are deeply entrenched.
- Interaction effects: Structural risks interact with each other in reinforcing ways. Power concentration enables lock-in; racing dynamics increase proliferation; economic disruption can accelerate authoritarian takeover.
- Governance-sensitive: Structural risks are particularly responsive to governance choices. Effective AI governance can mitigate many structural risks, while governance failures can exacerbate them.