Structural Risk Cruxes

Summary: Analyzes 12 key uncertainties about AI structural risks across power concentration, coordination feasibility, and institutional adaptation. Provides quantified probability ranges (US-China coordination 15-50%, winner-take-all dynamics 30-45%, racing dynamics manageable at 35-45%), finding that crux positions determine whether to prioritize governance interventions or technical safety work.

Key insights include:
  • AI racing dynamics are considered manageable through governance mechanisms (35-45% probability) rather than inevitable, despite visible competitive pressures and limited coordination success so far.
  • US-China AI coordination has a 15-50% probability of success according to expert assessments, with narrow technical cooperation (35-50%) more feasible than a comprehensive governance regime, despite broader geopolitical competition.
  • Winner-take-all dynamics in AI development are assessed as 30-45% likely, with current evidence showing extreme concentration: training costs reach roughly $170 million (Llama 3.1) and the top 3 cloud providers control 65-70% of the AI market.
| Dimension | Assessment | Evidence |
|---|---|---|
| Research Maturity | Early-stage | Limited empirical studies; most analysis theoretical |
| Expert Consensus | Low | Wide disagreement on whether structural risks are a distinct category |
| Resolution Timeline | 5-15 years | Many cruxes require observing AI deployment at scale |
| Policy Relevance | High | Determines priority between governance vs. technical interventions |
| Quantifiability | Limited | Most probability estimates are subjective expert judgments |
| Intervention Windows | Narrowing | Market concentration and international dynamics evolving rapidly |

Key evidence gap: empirical data on AI market structure evolution and institutional adaptation speed.

Structural risks from AI—including power concentration, lock-in of values or institutions, and breakdown of human agency—represent some of the most consequential yet uncertain challenges posed by advanced artificial intelligence. Unlike traditional AI safety risks focused on specific system failures, structural risks concern how AI transforms the fundamental architecture of human civilization. Your position on key uncertainties, or “cruxes,” in this domain largely determines whether you view these risks as urgent priorities requiring immediate governance interventions, or as speculative concerns that shouldn’t distract from more concrete technical safety work.

These cruxes are particularly important because they operate at different levels of abstraction and timescales. Some concern foundational questions about whether structural risks constitute a meaningful analytical category distinct from accident and misuse risks. Others focus on near-term competitive dynamics between AI developers and nations. Still others examine long-term questions about technological lock-in and human agency that may unfold over decades. The positions you take on these uncertainties collectively determine your overall structural risk worldview and corresponding intervention priorities.

Given the conceptual fuzziness inherent in structural risk analysis, these cruxes are themselves more speculative than those in other AI safety domains. Many lack clear empirical resolution criteria and involve complex interactions between technological capabilities, social dynamics, and institutional responses. Nevertheless, they represent the key decision points that separate different approaches to understanding and addressing AI’s systemic implications for human civilization.

[Decision tree diagram: positions on foundational cruxes cascading into strategic priorities]

This decision tree illustrates how positions on foundational cruxes cascade into different strategic priorities. The percentages represent rough probability ranges for each position based on expert elicitation.
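To make the cascade concrete, the sketch below combines illustrative midpoints of three crux ranges from this page into a rough joint weight for a "governance-first" scenario. It assumes the cruxes are independent and that midpoints are meaningful point estimates; both are simplifications, and the result is only as good as the underlying expert elicitation.

```python
# Illustrative only: combine midpoints of three crux ranges from this page into a
# rough joint weight for a "governance-first" scenario. Assumes the cruxes are
# independent and that midpoints are meaningful point estimates; both are
# simplifications of the expert-elicited ranges.

crux_ranges = {
    "structural risks genuinely distinct": (0.40, 0.55),
    "racing manageable with mechanisms":   (0.35, 0.45),
    "narrow US-China coordination":        (0.35, 0.50),
}

def midpoint(rng):
    lo, hi = rng
    return (lo + hi) / 2

joint = 1.0
for name, rng in crux_ranges.items():
    m = midpoint(rng)
    joint *= m
    print(f"{name}: midpoint {m:.3f}")

print(f"joint weight for the governance-first scenario: {joint:.2f}")
# ~0.08: individually plausible positions combine into a minority scenario, which
# is why positions on these cruxes cascade into quite different priorities.
```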


🔑 Key Crux: Foundations (Critical)

Are structural risks genuinely distinct from accident/misuse risks?

Whether 'structural risks' names real phenomena that require separate analysis, or is just a different level of abstraction on the same underlying risks.

Resolvability: 2-10 years
Status: Debated; no consensus on category boundaries

Key Positions

  • Structural risks are genuinely distinct (40-55%). Held by: GovAI, some longtermists. Implication: structural interventions (governance, coordination) are needed; technical safety alone is insufficient.
  • Useful framing but substantially overlapping (30-40%). Implication: use the structural lens for some problems; don't treat it as a separate research agenda.
  • Mostly an aggregation of other risks; not a useful category (15-25%). Held by: some AI safety researchers. Implication: focus on technical safety and misuse prevention; structural framing obscures more than it clarifies.

Would Update On

  • Theoretical analysis of category boundaries
  • Cases where structural vs individual framing leads to different interventions
  • Evidence that structural dynamics have independent causal power

This foundational crux shapes the entire field’s approach to AI safety prioritization. Those who view structural risks as genuinely distinct argue that AI’s effects on power concentration, institutional stability, and human agency operate through different causal mechanisms than individual system failures. They point to examples like algorithmic bias in hiring creating systematic inequality, or AI-enabled surveillance transforming state-citizen relationships—phenomena that emerge from the aggregate deployment of AI systems rather than specific malfunctions. This position suggests structural interventions like governance frameworks, coordination mechanisms, and institutional reforms are necessary complements to technical safety work.

Alternatively, researchers who view structural risks as primarily an aggregation of individual risks argue that focusing on preventing accidents and misuse will naturally address structural concerns. They contend that “structural risk” often conflates correlation with causation, attributing to AI what may simply reflect broader technological and social trends. This perspective suggests that the structural framing may obscure more concrete intervention points and dilute resources from proven technical safety approaches.

Recent research provides quantitative evidence on AI’s power-concentrating effects:

| Metric | Value | Source | Year |
|---|---|---|---|
| Top 3 cloud providers' AI market share | 65-70% | Korinek & Vipra | 2024 |
| US private AI investment | $109 billion | Stanford AI Index | 2024 |
| China private AI investment | $9.3 billion | Stanford AI Index | 2024 |
| Cost to train Llama 3.1 (405B) | ≈$170 million | Stanford AI Index | 2024 |
| Microsoft investment in OpenAI | >$13 billion | CRS | 2024 |
| Companies with models exceeding GPT-4 | 14 | Korinek & Vipra | 2024 |
| Workers needing AI reskilling by 2030 | >60% | World Economic Forum | 2025 |

In July 2024, the DOJ, FTC, UK CMA, and European Commission released a joint statement specifying three competition concerns: concentrated control of key inputs (chips, compute, talent), incumbent digital firms extending power into AI markets, and arrangements among key players reducing competition.
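One way to make "concentrated control" measurable is the Herfindahl-Hirschman Index (HHI) that competition regulators use in merger review. The sketch below applies it to hypothetical AI market shares; only the rough "top three hold about two-thirds" total echoes the table above, and the individual shares are assumptions for illustration.

```python
# Minimal sketch: the Herfindahl-Hirschman Index (HHI), a standard concentration
# measure used in merger review, applied to hypothetical AI market shares.
# The individual shares are illustrative assumptions; only the "top three hold
# roughly two-thirds of the market" total echoes the table above.

hypothetical_shares = [0.30, 0.22, 0.15, 0.10, 0.08, 0.15]  # last entry lumps the long tail

def hhi(shares):
    """HHI on the conventional 0-10,000 scale (percentage shares squared and summed)."""
    return sum((s * 100) ** 2 for s in shares)

print(f"HHI: {hhi(hypothetical_shares):.0f}")
# ~2,000 with these assumptions. US agencies have treated markets above roughly
# 1,800-2,500 as highly concentrated, depending on the guidelines in force; note
# that lumping the long tail into a single entry overstates the index somewhat.
```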

🔑 Key Crux: Foundations (Critical)

Does AI concentrate power more than previous technologies?

Whether AI is qualitatively different in its power-concentrating effects, or is following historical patterns of technological change.

Resolvability: 2-10 years
Status: Unclear; AI is early-stage; historical comparisons contested

Key Positions

  • AI is qualitatively different in its concentration effects (35-50%). Held by: some AI governance researchers, AI Now Institute. Implication: urgent need for antitrust, redistribution, and democratic governance of AI.
  • AI continues the historical pattern; not qualitatively new (30-40%). Held by: some economists, tech optimists. Implication: apply existing regulatory frameworks; don't overreact to AI-specific concentration.
  • AI may actually distribute power through open source and democratization (15-25%). Held by: some open source advocates. Implication: support open development; concentration concerns are overstated.

Would Update On

  • Empirical data on AI industry concentration trends
  • Historical analysis of technology and power concentration
  • Evidence on open source AI capability vs closed labs
  • Data on AI's effects on labor market concentration

Evidence for AI’s distinctive power-concentrating effects includes its scalability without proportional resource increases, network effects where data advantages compound, and first-mover advantages in setting industry standards. Current AI development shows extreme concentration among a handful of companies with the computational resources for frontier model training—a pattern that may be more pronounced than previous technologies. The transformative nature of general intelligence could amplify these effects beyond historical precedent.

However, historical analysis reveals that many transformative technologies initially appeared to concentrate power dramatically before competitive forces and regulatory responses distributed benefits more widely. The printing press, telegraph, and internet all raised similar concerns about information control and market concentration. Some economists argue that AI follows familiar patterns of innovation diffusion, where initial concentration gives way to broader adoption as costs decrease and capabilities standardize.


🔑 Key Crux: Competition & Coordination (Critical)

Are AI racing dynamics inevitable given competitive pressures?

Whether competitive pressures (commercial, geopolitical) make unsafe racing dynamics unavoidable, or if coordination can prevent races.

Resolvability: 2-10 years
Status: Racing dynamics visible; some voluntary coordination attempts

Key Positions

  • Racing is largely inevitable; coordination will fail (30-45%). Held by: some game theorists, realists. Implication: focus on making racing safer; assume coordination fails; technical solutions are paramount.
  • Racing can be managed with the right mechanisms (35-45%). Held by: GovAI, some policy researchers. Implication: invest heavily in coordination mechanisms, compute governance, and international agreements.
  • Racing dynamics are overstated; labs can coordinate (15-25%). Held by: some industry observers. Implication: support voluntary coordination; the racing narrative may be self-fulfilling.

Would Update On

  • Success or failure of lab coordination (RSPs, etc.)
  • International coordination outcomes
  • Evidence from other domains on coordination under competitive pressure
  • Game-theoretic analysis with realistic assumptions

Current evidence shows clear competitive pressures driving rapid AI development with limited safety coordination. Major labs regularly announce accelerated timelines and capability breakthroughs in apparent response to competitors. The hundreds of billions invested in AI development, combined with first-mover advantages in key markets, creates strong incentives to prioritize speed over safety measures. Geopolitically, the framing of AI as a national security priority further intensifies racing dynamics between the US and China.

Those who believe racing can be managed point to successful coordination in other high-stakes domains, including nuclear weapons control, climate agreements, and financial regulation. They argue that shared recognition of catastrophic risks can overcome competitive pressures when appropriate mechanisms exist. Recent initiatives like responsible scaling policies (RSPs) and voluntary commitments on frontier AI safety represent early attempts at such coordination. However, skeptics note that these voluntary measures lack enforcement mechanisms and may not hold under severe competitive pressure.
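The game-theoretic intuition behind the "racing is inevitable" position can be shown with a toy two-lab payoff matrix. The payoff numbers below are purely illustrative assumptions, not estimates from the literature; the point is the structure, in which racing is each lab's best response even though mutual caution is better for both.

```python
# Toy payoff matrix for the racing-dynamics argument: two labs each choose to
# invest in safety ("cautious") or cut corners ("race"). Payoff numbers are
# illustrative assumptions only.

payoffs = {
    # (lab_a_choice, lab_b_choice): (lab_a_payoff, lab_b_payoff)
    ("cautious", "cautious"): (3, 3),   # shared safety, shared market
    ("cautious", "race"):     (0, 4),   # cautious lab falls behind
    ("race",     "cautious"): (4, 0),
    ("race",     "race"):     (1, 1),   # both race; safety eroded for everyone
}

def best_response(my_options, their_choice, me_first=True):
    """Return the option with the highest payoff against a fixed opponent choice."""
    def my_payoff(mine):
        key = (mine, their_choice) if me_first else (their_choice, mine)
        return payoffs[key][0 if me_first else 1]
    return max(my_options, key=my_payoff)

options = ["cautious", "race"]
for their_choice in options:
    print(f"If the other lab plays {their_choice!r}, "
          f"best response is {best_response(options, their_choice)!r}")
# "race" is the best response either way, even though (cautious, cautious) beats
# (race, race) for both labs -- the structure coordination mechanisms try to change.
```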

🔑 Key Crux: Competition & Coordination (High)

Can meaningful AI coordination be achieved without external enforcement?

Whether voluntary coordination among AI developers can work, or if binding regulation/enforcement is required.

Resolvability: 2-10 years
Status: Voluntary commitments exist (RSPs); limited enforcement; competitive pressures strong

Key Positions

  • Voluntary coordination can work with the right incentives (20-35%). Held by: some lab leadership. Implication: support voluntary standards; build trust; avoid heavy regulation that might backfire.
  • Coordination requires external enforcement (40-55%). Held by: most governance researchers. Implication: focus on regulation, auditing, and liability; don't rely on voluntary commitments.
  • Neither voluntary nor regulatory coordination will work (15-25%). Implication: focus on technical solutions; prepare for uncoordinated development; invest in defensive measures.

Would Update On

  • Track record of RSPs and voluntary commitments
  • Regulatory enforcement attempts and outcomes
  • Evidence of labs defecting from commitments under pressure
  • Successful coordination in analogous domains

Early evidence on voluntary coordination shows mixed results. Anthropic, OpenAI, and other major labs have adopted responsible scaling policies and participated in safety commitments, demonstrating some willingness to coordinate. However, these commitments remain largely aspirational, with limited transparency about implementation and no binding enforcement mechanisms. The recent acceleration in capability announcements and deployment timelines suggests competitive pressures may be overwhelming voluntary restraint.

Industry observers note that successful voluntary coordination often requires repeated interaction, shared norms, and credible monitoring—conditions that may be difficult to maintain in a rapidly evolving field with high stakes. Financial sector coordination during crises provides some positive precedents, but typically involved regulatory backstops and shared crisis recognition. The challenge for AI coordination is achieving cooperation before crises demonstrate the need for restraint.

🔑 Key Crux: Competition & Coordination (Critical)

Can US-China AI coordination succeed despite geopolitical competition?

Whether major AI powers can coordinate on safety/governance despite strategic rivalry.

Resolvability: 2-10 years
Status: Very limited coordination; competition dominant; some backchannel communication

Key Positions

  • Meaningful coordination is achievable (15-30%). Held by: some diplomats, Track II participants. Implication: invest heavily in diplomatic channels; find areas of shared interest; build on bio/nuclear precedent.
  • Narrow coordination on specific risks is possible (35-50%). Implication: focus on achievable goals (bioweapons prevention, accident hotlines); don't expect a comprehensive regime.
  • Great power competition precludes coordination (25-40%). Held by: realists, some national security analysts. Implication: focus on domestic and allied governance and defensive measures; prepare for fragmented development.

Would Update On

  • US-China AI dialogue outcomes
  • Coordination success on specific risks
  • Broader geopolitical relationship changes
  • Precedents from other technology domains

The current US-China relationship on AI combines strategic competition with limited cooperation on specific issues. While broader technology export controls and investment restrictions reflect deep mistrust, both countries have participated in international AI governance forums and expressed concern about catastrophic risks. The November 2023 Biden-Xi summit produced modest commitments to AI risk dialogue, though follow-through remains limited.

Historical precedents suggest both possibilities and constraints. Nuclear arms control succeeded despite Cold War tensions, demonstrating that existential risks can motivate cooperation even between adversaries. However, those agreements emerged after decades of crisis and near-misses that demonstrated mutual vulnerability. AI cooperation may require similar crisis recognition, which could come too late to prevent harmful racing dynamics.

| Date | Event | Significance |
|---|---|---|
| Nov 2023 | Biden-Xi Woodside Summit | First agreement to discuss AI governance risks |
| Mar 2024 | UN resolution on safe AI (US-led) | China supported the US-led resolution, adopted with the support of all 193 member states |
| May 2024 | Geneva bilateral meeting | First US-China meeting specifically on AI governance |
| Jun 2024 | UN resolution on AI capacity-building (China-led) | US supported the China-led resolution, backed by 120+ member states |
| Nov 2024 | Biden-Xi APEC meeting | Affirmed the need for human control over decisions to use nuclear weapons |
| Feb 2025 | Paris AI Action Summit | Called for harmonized global standards; showed framework gaps |
| Jul 2025 | China's Global AI Governance Action Plan | China proposes an international AI cooperation organization |

Despite these diplomatic milestones, fundamental tensions persist. The US ties AI exports to political alignment through chip export controls, while China promotes “open cooperation with fewer conditions.” Former Google CEO Eric Schmidt has called for explicit US-China collaboration, stating both nations have “a vested interest to keep the world stable” and ensure “human control of these tools.”


🔑 Key Crux: Power Dynamics (High)

Will AI development produce winner-take-all dynamics?

Whether AI advantages compound to produce extreme concentration, or if competition will persist.

Resolvability: 2-10 years
Status: Some concentration visible; unclear if winner-take-all

Key Positions

  • Winner-take-all is likely in frontier AI (30-45%). Held by: some AI researchers, critics of Big Tech. Implication: urgent antitrust action needed; support alternatives and public AI development.
  • Oligopoly is more likely than monopoly (35-45%). Implication: manage concentration but don't expect a single winner; focus on maintaining competition.
  • Competition will persist; open source prevents lock-in (20-30%). Held by: open source advocates. Implication: support open development; the market will self-correct; concentration fears are overstated.

Would Update On

  • Frontier AI market structure evolution
  • Open source capability vs closed labs over time
  • Evidence on returns to scale in AI
  • Regulatory intervention effects

Current evidence shows significant concentration in frontier AI capabilities among a small number of well-resourced companies, driven by advantages in computing resources, data access, and talent acquisition. The enormous costs of training state-of-the-art models—potentially reaching hundreds of millions or billions of dollars—create substantial barriers to entry. Network effects and data advantages may further compound these inequalities, as successful AI systems generate user data that improves performance.

However, the trajectory toward winner-take-all outcomes remains uncertain. Open-source AI development has produced capable models like Llama and others that approach frontier performance at lower costs. Regulatory intervention could limit concentration through antitrust enforcement or mandatory sharing requirements. Historical precedent suggests that even technologies with strong network effects often settle into competitive oligopolies rather than pure monopolies.
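A toy simulation can illustrate the compounding-advantage mechanism described above, in which users gravitate toward the higher-quality system and quality improves with the data a larger user base generates. Every parameter below (initial quality gap, preference sharpness, learning rate) is an assumption chosen for illustration; real markets need not tip this way.

```python
# Toy model of the compounding data-advantage argument: users gravitate toward
# the higher-quality system, and quality improves with the data a larger user
# base brings. All parameters are illustrative assumptions.

leader_q, rival_q = 1.05, 1.00   # small initial quality gap
gamma = 8                        # how strongly users prefer the better system
learning_rate = 0.3              # how strongly user share feeds back into quality

for year in range(1, 9):
    # Users allocate in proportion to quality raised to a preference exponent.
    leader_share = leader_q**gamma / (leader_q**gamma + rival_q**gamma)
    # Each provider's quality grows with the data its user share brings in.
    leader_q *= 1 + learning_rate * leader_share
    rival_q  *= 1 + learning_rate * (1 - leader_share)
    print(f"year {year}: leader share {leader_share:.2f}")
# With these assumptions the market tips toward the early leader within a few
# years; weaker feedback, open-source diffusion, diminishing returns to data, or
# regulation could prevent tipping.
```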

🔑 Key Crux: Power Dynamics (High)

Would AI-enabled lock-in be reversible?

Whether structures/values locked in via AI could later be changed, or if lock-in would be permanent.

Resolvability: 10+ years
Status: Speculative; no lock-in has occurred yet

Key Positions

  • AI lock-in would be effectively permanent (20-35%). Held by: some longtermists, Ord/MacAskill. Implication: preventing lock-in is an extremely high priority; current values matter enormously.
  • Lock-in would be very hard but not impossible to reverse (35-45%). Implication: lock-in prevention is important but not absolute; build reversibility into systems.
  • Lock-in is unlikely; systems are more fragile than we think (25-35%). Held by: some historians. Implication: don't overweight lock-in concerns; focus on nearer-term risks.

Would Update On

  • Historical analysis of technological lock-in
  • Analysis of AI's effect on change difficulty
  • Evidence on value evolution in stable systems
  • Theoretical analysis of lock-in mechanisms

The permanence of potential AI-enabled lock-in depends on several factors that remain highly uncertain. Advanced AI systems could theoretically enable unprecedented surveillance and control capabilities, making coordination for change extremely difficult. If AI development concentrated among a small number of actors, they might gain sufficient leverage to preserve favorable arrangements indefinitely. The speed and scale of AI deployment could create path dependencies that become increasingly difficult to reverse.

However, historical analysis suggests that even seemingly permanent institutional arrangements eventually face challenges from technological change, generational shifts, or external pressures. The Soviet system appeared locked-in for decades before rapid collapse. Economic and technological evolution continues to create new possibilities for social organization. The question may be not whether AI-enabled lock-in would be reversible, but whether it would persist long enough to significantly constrain human development.

Recent research has identified specific mechanisms through which AI could enable value lock-in:

| Mechanism | Description | Concern Level |
|---|---|---|
| Technical Architecture | AI systems can maintain unchangeable values through design | High |
| Deceptive Alignment | 2024 research showed Claude 3 Opus sometimes strategically answered prompts to avoid retraining | High |
| Alignment Faking | AI systems may create false impressions of alignment to avoid modification | Medium-High |
| Institutional Entrenchment | AI-enabled surveillance and control capabilities could make coordination for change extremely difficult | Medium |
| Economic Path Dependency | Winner-take-all dynamics may entrench early value choices | Medium |

The Forethought Foundation’s analysis notes that AGI could make it “technologically feasible to perfectly preserve nuanced specifications of a wide variety of values or goals far into the future”—potentially for “millions, and plausibly trillions, of years.” The World Economic Forum’s 2024 white paper on AI Value Alignment explores how to guide AI systems toward shared human values while preserving adaptability.

🔑 Key Crux: Power Dynamics (Medium)

Is there a risk of premature values crystallization?

Whether AI could lock in current values before humanity has developed sufficient moral wisdom.

Resolvability: 10+ years
Status: Theoretical concern; no near-term crystallization mechanism

Key Positions

  • Premature crystallization is a serious risk (25-40%). Held by: Ord, MacAskill. Implication: prioritize moral uncertainty; avoid embedding specific values; build for value evolution.
  • Values will continue evolving regardless of AI (35-45%). Implication: less urgent; focus on present values; trust future adaptation.
  • We can't avoid embedding values; we should embed the best current ones (20-30%). Implication: focus on getting values right now; crystallization may be unavoidable.

Would Update On

  • Analysis of how AI might crystallize values
  • Historical study of value evolution mechanisms
  • Research on moral progress drivers

Concerns about premature values crystallization reflect the observation that AI systems necessarily embed particular values and assumptions in their design and training. If these systems become sufficiently powerful and widespread, they might entrench current moral frameworks before humanity has time to develop greater moral wisdom through experience and reflection. Historical examples of moral progress—such as expanding circles of moral consideration or evolving concepts of justice—suggest that continued value evolution is important for human flourishing.

Critics argue that values crystallization concerns may be overblown, pointing to the continued evolution of values even in stable societies with established institutions. They note that AI systems can be updated and retrained as values evolve, and that competitive pressures may favor systems aligned with evolving social preferences. The challenge lies in distinguishing between values that should be preserved and those that should remain open to evolution.


🔑 Key Crux: Human Agency (High)

Will AI assistance cause human agency/capability atrophy?

Whether humans will lose critical skills and decision-making capacity through AI dependency.

Resolvability: 2-10 years
Status: Early evidence from automation; AI assistance much newer

Key Positions

  • Significant atrophy is likely without countermeasures (40-55%). Held by: Nicholas Carr, some human factors researchers. Implication: mandate skill maintenance; design AI to preserve human capability; accept some efficiency loss.
  • Some atrophy; critical skills can be preserved (30-40%). Implication: identify and protect critical skills; let others atrophy; intervene in a targeted way.
  • New skills emerge; net positive transformation (15-25%). Held by: tech optimists. Implication: focus on developing new skills; don't fight inevitable transitions.

Would Update On

  • Longitudinal studies on AI use and skill retention
  • Evidence from domains with long AI assistance history
  • Successful skill preservation programs
  • Analysis of what skills are actually needed

Evidence from aviation automation provides concerning precedents for skill atrophy concerns. Pilots who rely heavily on autopilot systems show measurable deterioration in manual flying skills, contributing to accidents when automation fails and human intervention is required. Similar patterns appear in navigation (GPS dependency), calculation (calculator reliance), and memory (smartphone externalization). The concern is that widespread AI assistance could create systemic vulnerability if humans lose capacity for independent judgment and action.

However, automation also demonstrates that humans can maintain critical skills through deliberate practice and appropriate system design. Airlines mandate manual flying requirements and emergency procedures training. Medical professionals maintain diagnostic skills despite decision support systems. The key question is whether society will proactively identify and preserve essential human capabilities, or allow market pressures to optimize for short-term efficiency at the expense of long-term resilience.

Quantitative Evidence on AI-Induced Skill Atrophy

| Finding | Source | Implication |
|---|---|---|
| 39% of existing skills will be transformed or outdated by 2030 | World Economic Forum | Massive reskilling need |
| 55,000 US job cuts directly attributed to AI in 2025 | Industry reports | Entry-level positions most affected |
| >60% of the workforce needing reskilling | WEF 2025 | Institutional adaptation required |
| Hiring slowed for entry-level programmers and analysts | McKinsey | AI performing tasks once used for training |

A 2024 paper titled “The Paradox of Augmentation: A Theoretical Model of AI-Induced Skill Atrophy” directly addresses the concern that skills erode as humans rely on AI augmentation. Research published in New Biotechnology (2025) by Holzinger et al. examines challenges of human oversight in complex AI systems, noting that “as AI systems grow increasingly complex, opaque, and autonomous, ensuring responsible use becomes a formidable challenge.”

🔑 Key Crux: Human Agency (Critical)

Can meaningful human oversight of advanced AI be maintained?

Whether humans can maintain genuine oversight as AI systems become more capable and complex.

Resolvability: 2-10 years
Status: Current oversight limited; scaling unclear

Key Positions

  • Meaningful oversight is achievable with investment (30-45%). Held by: Anthropic, some AI safety researchers. Implication: invest heavily in interpretability, evaluation, and oversight tools.
  • Oversight will become increasingly formal and shallow (35-45%). Implication: design for robustness to shallow oversight; accept limitations; build redundancy.
  • Genuine oversight of advanced AI is not possible (15-25%). Held by: some AI pessimists. Implication: don't build systems that require human oversight; a fundamentally different approach is needed.

Would Update On

  • Progress in interpretability research
  • Evidence on human ability to oversee complex systems
  • Development of oversight tools and their effectiveness
  • Empirical studies on oversight quality as systems scale

Current human oversight of AI systems often resembles “security theater”—superficial review procedures that provide reassurance without meaningful control. Large language models operate as black boxes even to their creators, making genuine oversight extremely challenging. As systems become more capable and operate faster than human cognition, maintaining meaningful human involvement becomes increasingly difficult.

Research in interpretability and AI evaluation offers some hope for maintaining oversight through better tools and methodologies. Techniques like mechanistic interpretability, constitutional AI, and automated evaluation could potentially scale human oversight capabilities. However, this requires significant investment and may lag behind capability development. The fundamental challenge is that truly advanced AI systems may operate in ways that exceed human comprehension, making oversight qualitatively different from previous technologies.


🔑 Key Crux: Systemic Dynamics (High)

Can social/institutional adaptation keep pace with AI change?

Whether human institutions can adapt quickly enough to manage AI-driven changes.

Resolvability: 2-10 years
Status: AI changing faster than regulation; some adaptation occurring

Key Positions

  • Adaptation will fall dangerously behind (35-50%). Held by: many AI governance researchers. Implication: slow AI development; build adaptive institutions; prepare for governance gaps.
  • Adaptation will lag but manage (35-45%). Implication: focus on building adaptability; accept some lag; don't panic.
  • Institutions can adapt adequately (15-25%). Held by: some optimists. Implication: trust existing institutions; incremental reform is sufficient.

Would Update On

  • Speed of regulatory adaptation vs AI development
  • Historical comparison to other fast-changing technologies
  • Evidence on institutional flexibility
  • Success of adaptive governance experiments

The current pace of AI development clearly outpaces institutional adaptation. Regulatory frameworks lag years behind technological capabilities, with agencies struggling to understand systems that evolve monthly. Traditional policy-making processes involving extensive consultation, analysis, and legislative approval are poorly suited to rapidly changing technologies. The result is a governance gap where powerful AI systems operate with minimal oversight or accountability.

However, institutions have demonstrated adaptability to other technological disruptions. Financial regulators responded to digital trading, privacy laws evolved to address internet technologies, and safety standards adapted to new transportation methods. The question is whether AI’s pace and breadth of impact exceeds institutional adaptation capacity, or whether new governance approaches can bridge the gap. Experiments in adaptive regulation, regulatory sandboxes, and anticipatory governance offer potential models but remain largely untested at scale.

Two contrasting models have emerged for AI governance institutions:

| Approach | Example | Advantages | Challenges |
|---|---|---|---|
| Adapt existing bodies | China's Cyberspace Administration | Existing authority and expertise | May lack AI-specific knowledge |
| Create specialized institutions | Spain's AESIA, UK AI Safety Institute | Focused expertise | Limited authority, resources |
| Regulatory sandboxes | UK FCA fintech sandbox | Enables experimentation | Difficult to scale |
| Anticipatory governance | Singapore Model AI Governance Framework | Proactive; flexible | Requires technical foresight |

Key 2024-2025 developments include:

  • May 2024: Council of Europe adopted first international AI treaty on human rights and democracy
  • 2024: UN established High-Level Advisory Body on AI
  • 2024: Seoul Summit produced voluntary Frontier AI Safety Commitments from 16 major AI companies
  • 2024: Federal AI Risk Management Act mandated NIST AI Risk Management Framework for US agencies

🔑 Key Crux: Systemic Dynamics (Medium)

Do AI interaction speeds create fundamentally new risks?

Whether AI systems interacting faster than human reaction time creates qualitatively new dangers.

Resolvability: 2-10 years
Status: Some fast AI interactions (trading); broader dynamics unclear

Key Positions

  • Speed creates qualitatively new systemic risks (30-45%). Held by: some financial stability researchers. Implication: build circuit breakers; require human checkpoints; slow down critical systems.
  • Speed is a factor but manageable (35-45%). Implication: design for fast failure recovery; accept some speed; use targeted interventions.
  • Speed concerns are overstated (20-30%). Implication: don't sacrifice capability for speed limits; focus on other risks.

Would Update On

  • Analysis of flash crash dynamics
  • Evidence from high-speed AI system interactions
  • Research on human oversight of fast systems
  • Incidents involving AI speed

Financial markets provide clear examples of how AI speed can create systemic risks. Flash crashes driven by algorithmic trading have caused market disruptions within milliseconds, too fast for human intervention. These events demonstrate how AI systems interacting at superhuman speeds can create cascading failures that exceed traditional risk management capabilities.

As AI systems become more prevalent across critical infrastructure, similar dynamics could emerge in power grids, transportation networks, or communication systems. The concern is not just individual system failures, but emergent behaviors from AI systems interacting faster than human operators can monitor or control. However, the same speed that creates risks also enables rapid response systems and fail-safes that could mitigate dangers more effectively than human-speed systems.
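One concrete form of the "circuit breakers" and human checkpoints mentioned in the positions above is a rate-based breaker that halts automated actions once they exceed what a human overseer could plausibly review. The sketch below is a minimal illustration only; the class name, thresholds, and reset behavior are assumptions rather than any deployed design.

```python
# Minimal sketch of a rate-based circuit breaker: halt automated actions when
# their rate exceeds what a human overseer could plausibly review. Thresholds
# and reset behavior are illustrative assumptions.

import time
from collections import deque

class CircuitBreaker:
    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps = deque()
        self.tripped = False

    def allow(self) -> bool:
        """Record an attempted action; return False once the rate limit trips."""
        now = time.monotonic()
        self.timestamps.append(now)
        # Drop actions that fell out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) > self.max_actions:
            self.tripped = True   # stays tripped until a human resets it
        return not self.tripped

    def human_reset(self):
        self.timestamps.clear()
        self.tripped = False

breaker = CircuitBreaker(max_actions=100, window_seconds=1.0)
executed = sum(breaker.allow() for _ in range(500))   # burst of 500 rapid actions
print(f"actions executed before the breaker tripped: {executed}")  # 100
```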


The structural risks landscape presents both concerning and promising developments. On the concerning side, current trends show accelerating AI capabilities development with limited coordination between major players, increasing concentration of power among a few well-resourced organizations, and institutional adaptation lagging significantly behind technological change. The competitive dynamics between the US and China have intensified rather than leading to cooperation, while voluntary coordination mechanisms remain largely untested under serious pressure.

However, promising developments include growing awareness of structural risks among policymakers and researchers, early experiments in governance frameworks like responsible scaling policies, and increasing investment in AI safety research including interpretability and alignment work. Some international dialogue on AI governance continues despite broader geopolitical tensions, and civil society organizations are mobilizing around AI accountability and democratic governance issues.

Looking ahead 1-2 years, we expect continued rapid capability development with periodic attempts at voluntary coordination among leading labs. Regulatory frameworks will likely emerge in major jurisdictions but may struggle to keep pace with technological advancement. International coordination will probably remain limited to narrow technical cooperation rather than comprehensive governance regimes. The critical question is whether early warning signs of structural risks will motivate more serious coordination efforts or be set aside because responding to them is seen as a competitive disadvantage.

In the 2-5 year timeframe, the resolution of several key cruxes may become clearer. We will have better evidence on whether voluntary industry coordination can survive competitive pressures, whether human oversight can scale with AI capabilities, and whether institutions can develop adaptive governance mechanisms. The trajectory of US-China relations and broader geopolitical stability will significantly influence the possibility for international cooperation. Most importantly, we may see the first examples of AI systems with capabilities that clearly exceed human oversight capacity, forcing concrete decisions about acceptable risk levels and governance approaches.

Despite extensive analysis, fundamental uncertainties remain about structural risks from AI. We lack clear empirical metrics for measuring power concentration or institutional adaptation speed, making it difficult to distinguish normal technological disruption from qualitatively new structural changes. The interaction effects between technical AI capabilities and social dynamics are poorly understood, with most analysis based on speculation rather than rigorous empirical study.

The timeline for critical decisions remains highly uncertain. Some structural changes may happen gradually over decades, allowing time for institutional adaptation, while others could occur rapidly during periods of capability growth or geopolitical crisis. We also have limited understanding of which interventions would be most effective, with ongoing debates about whether technical solutions, governance frameworks, or democratic accountability measures should take priority.

Perhaps most fundamentally, the very definition and boundaries of structural risks remain contested. This conceptual uncertainty makes it difficult to design targeted interventions or evaluate progress. Resolution of these foundational questions will likely require both theoretical development and empirical evidence from AI deployment at scale—evidence that may come too late to prevent potentially harmful structural changes.


| If you believe… | Prioritize… |
|---|---|
| Structural risks are genuinely distinct | Governance and coordination research |
| AI concentrates power qualitatively more | Antitrust, redistribution, democratic governance |
| Racing is inevitable | Making racing safer; technical solutions |
| Coordination can succeed | Investment in diplomatic channels; voluntary commitments |
| International coordination is unlikely | Domestic governance; defensive measures |
| Winner-take-all dynamics are likely | Urgent antitrust; open-source support |
| Lock-in would be permanent | Prevention over adaptation; current values matter |
| Human oversight is feasible | Interpretability and evaluation research |
| Adaptation will lag dangerously | Slow AI development; build adaptive institutions |