Structural Risk Cruxes
- AI racing dynamics are considered manageable by governance mechanisms (35-45% probability) rather than inevitable, despite visible competitive pressures and limited current coordination success.
- US-China AI coordination shows a 15-50% probability of success according to expert assessments, with narrow technical cooperation (35-50% likely) more feasible than comprehensive governance regimes, despite broader geopolitical competition.
- Winner-take-all dynamics in AI development are assessed as 30-45% likely, with current evidence showing extreme concentration: training costs reach $170 million (Llama 3.1) and the top 3 cloud providers control 65-70% of AI market share.
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Research Maturity | Early-stage | Limited empirical studies; most analysis theoretical |
| Expert Consensus | Low | Wide disagreement on whether structural risks are a distinct category |
| Resolution Timeline | 5-15 years | Many cruxes require observing AI deployment at scale |
| Policy Relevance | High | Determines priority between governance vs. technical interventions |
| Quantifiability | Limited | Most probability estimates are subjective expert judgments |
| Intervention Windows | Narrowing | Market concentration and international dynamics evolving rapidly |
| Key Evidence Gap | Empirical data needed | AI market structure evolution; institutional adaptation speed |
What Are Structural Risk Cruxes?
Structural risks from AI—including power concentration, lock-in of values or institutions, and breakdown of human agency—represent some of the most consequential yet uncertain challenges posed by advanced artificial intelligence. Unlike traditional AI safety risks focused on specific system failures, structural risks concern how AI transforms the fundamental architecture of human civilization. Your position on key uncertainties, or “cruxes,” in this domain largely determines whether you view these risks as urgent priorities requiring immediate governance interventions, or as speculative concerns that shouldn’t distract from more concrete technical safety work.
These cruxes are particularly important because they operate at different levels of abstraction and timescales. Some concern foundational questions about whether structural risks constitute a meaningful analytical category distinct from accident and misuse risks. Others focus on near-term competitive dynamics between AI developers and nations. Still others examine long-term questions about technological lock-in and human agency that may unfold over decades. The positions you take on these uncertainties collectively determine your overall structural risk worldview and corresponding intervention priorities.
Given the conceptual fuzziness inherent in structural risk analysis, these cruxes are themselves more speculative than those in other AI safety domains. Many lack clear empirical resolution criteria and involve complex interactions between technological capabilities, social dynamics, and institutional responses. Nevertheless, they represent the key decision points that separate different approaches to understanding and addressing AI’s systemic implications for human civilization.
Crux Decision Framework
This decision tree illustrates how positions on foundational cruxes cascade into different strategic priorities. The percentages represent rough probability ranges for each position based on expert elicitation.
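As a rough sketch of the cascade described above, the snippet below encodes a few of this page's cruxes as a nested structure, using the probability ranges quoted in the summary bullets (35-45% that racing is manageable, 30-45% for winner-take-all dynamics). The tree shape, priority labels, and the top-level node's range are simplified assumptions for illustration, not the page's exact interactive tree.

```python
# Illustrative sketch only: a simplified encoding of how positions on
# structural-risk cruxes cascade into strategic priorities. Probability
# ranges come from this page's summary bullets; the tree structure and
# priority labels are assumptions for illustration.

from dataclasses import dataclass, field


@dataclass
class Crux:
    question: str
    probability_range: tuple[float, float]  # rough expert-elicited range
    if_yes: str                             # priority implied by an affirmative position
    if_no: str                              # priority implied by a negative position
    children: list["Crux"] = field(default_factory=list)


crux_tree = Crux(
    question="Are structural risks distinct from accident/misuse risks?",
    probability_range=(0.0, 1.0),  # contested; no consensus estimate given on this page
    if_yes="Governance and coordination research",
    if_no="Focus on technical safety for accidents and misuse",
    children=[
        Crux(
            question="Are racing dynamics manageable via governance?",
            probability_range=(0.35, 0.45),
            if_yes="Invest in coordination mechanisms (RSPs, treaties)",
            if_no="Make racing safer; prioritize technical solutions",
        ),
        Crux(
            question="Will AI development produce winner-take-all dynamics?",
            probability_range=(0.30, 0.45),
            if_yes="Urgent antitrust; open-source support",
            if_no="Monitor market structure; lighter-touch intervention",
        ),
    ],
)


def walk(crux: Crux, depth: int = 0) -> None:
    """Print each crux, its rough probability range, and the priority each position implies."""
    lo, hi = crux.probability_range
    indent = "  " * depth
    print(f"{indent}{crux.question} ({lo:.0%}-{hi:.0%})")
    print(f"{indent}  yes -> {crux.if_yes}")
    print(f"{indent}  no  -> {crux.if_no}")
    for child in crux.children:
        walk(child, depth + 1)


walk(crux_tree)
```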
Foundational Cruxes
Are structural risks genuinely distinct from accident/misuse risks?
Whether 'structural risks' names real phenomena that require separate analysis, or is just a different level of abstraction on the same underlying risks.
Key Positions
Would Update On
- Theoretical analysis of category boundaries
- Cases where structural vs individual framing leads to different interventions
- Evidence that structural dynamics have independent causal power
This foundational crux shapes the entire field’s approach to AI safety prioritization. Those who view structural risks as genuinely distinct argue that AI’s effects on power concentration, institutional stability, and human agency operate through different causal mechanisms than individual system failures. They point to examples like algorithmic bias in hiring creating systematic inequality, or AI-enabled surveillance transforming state-citizen relationships—phenomena that emerge from the aggregate deployment of AI systems rather than specific malfunctions. This position suggests structural interventions like governance frameworks, coordination mechanisms, and institutional reforms are necessary complements to technical safety work.
Alternatively, researchers who view structural risks as primarily an aggregation of individual risks argue that focusing on preventing accidents and misuse will naturally address structural concerns. They contend that “structural risk” often conflates correlation with causation, attributing to AI what may simply reflect broader technological and social trends. This perspective suggests that the structural framing may obscure more concrete intervention points and dilute resources from proven technical safety approaches.
Evidence on AI Market Concentration
Recent research provides quantitative evidence on AI’s power-concentrating effects:
| Metric | Value | Source | Year |
|---|---|---|---|
| Top 3 cloud providers’ AI market share | 65-70% | Korinek & Vipra | 2024 |
| US private AI investment | $109 billion | Stanford AI Index | 2024 |
| China private AI investment | $9.3 billion | Stanford AI Index | 2024 |
| Cost to train Llama 3.1 (405B) | ≈$170 million | Stanford AI Index | 2024 |
| Microsoft investment in OpenAI | >$13 billion | CRS | 2024 |
| Companies with models exceeding GPT-4 | 14 | Korinek & Vipra | 2024 |
| Workers needing AI reskilling by 2030 | >60% | World Economic Forum | 2025 |
In July 2024, the DOJ, FTC, UK CMA, and European Commission released a joint statement specifying three competition concerns: concentrated control of key inputs (chips, compute, talent), incumbent digital firms extending power into AI markets, and arrangements among key players reducing competition.
Does AI concentrate power more than previous technologies?
Whether AI is qualitatively different in its power-concentrating effects, or is following historical patterns of technological change.
Key Positions
Would Update On
- Empirical data on AI industry concentration trends
- Historical analysis of technology and power concentration
- Evidence on open source AI capability vs closed labs
- Data on AI's effects on labor market concentration
Evidence for AI’s distinctive power-concentrating effects includes its scalability without proportional resource increases, network effects where data advantages compound, and first-mover advantages in setting industry standards. Current AI development shows extreme concentration among a handful of companies with the computational resources for frontier model training—a pattern that may be more pronounced than previous technologies. The transformative nature of general intelligence could amplify these effects beyond historical precedent.
However, historical analysis reveals that many transformative technologies initially appeared to concentrate power dramatically before competitive forces and regulatory responses distributed benefits more widely. The printing press, telegraph, and internet all raised similar concerns about information control and market concentration. Some economists argue that AI follows familiar patterns of innovation diffusion, where initial concentration gives way to broader adoption as costs decrease and capabilities standardize.
Competition and Coordination Cruxes
Are AI racing dynamics inevitable given competitive pressures?
Whether competitive pressures (commercial, geopolitical) make unsafe racing dynamics unavoidable, or if coordination can prevent races.
Key Positions
Would Update On
- Success or failure of lab coordination (RSPs, etc.)
- International coordination outcomes
- Evidence from other domains on coordination under competitive pressure
- Game-theoretic analysis with realistic assumptions
Current evidence shows clear competitive pressures driving rapid AI development with limited safety coordination. Major labs regularly announce accelerated timelines and capability breakthroughs in apparent response to competitors. The hundreds of billions of dollars invested in AI development, combined with first-mover advantages in key markets, create strong incentives to prioritize speed over safety measures. Geopolitically, the framing of AI as a national security priority further intensifies racing dynamics between the US and China.
Those who believe racing can be managed point to successful coordination in other high-stakes domains, including nuclear weapons control, climate agreements, and financial regulation. They argue that shared recognition of catastrophic risks can overcome competitive pressures when appropriate mechanisms exist. Recent initiatives like responsible scaling policies (RSPs) and voluntary commitments on frontier AI safety represent early attempts at such coordination. However, skeptics note that these voluntary measures lack enforcement mechanisms and may not hold under severe competitive pressure.
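One of the update criteria listed above is game-theoretic analysis with realistic assumptions. As a toy illustration only, the sketch below encodes the two-lab safety-versus-speed dilemma that racing arguments typically assume; the payoff numbers are invented placeholders chosen to produce a prisoner's-dilemma structure, not estimates from any source cited here.

```python
# Toy two-lab racing model: each lab chooses to invest in safety or cut
# corners for speed. Payoff numbers are invented placeholders chosen to
# illustrate the prisoner's-dilemma structure often assumed in racing
# arguments; they are not estimates from any cited source.

from itertools import product

ACTIONS = ("safety", "speed")

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
PAYOFFS = {
    ("safety", "safety"): (3, 3),  # mutual restraint: shared benefit, low accident risk
    ("safety", "speed"):  (0, 4),  # the restrained lab loses market position
    ("speed",  "safety"): (4, 0),
    ("speed",  "speed"):  (1, 1),  # full race: both bear elevated accident risk
}


def best_response(opponent_action: str) -> str:
    """Return the row player's payoff-maximizing action against a given opponent action."""
    return max(ACTIONS, key=lambda a: PAYOFFS[(a, opponent_action)][0])


for opponent in ACTIONS:
    print(f"best response to {opponent}: {best_response(opponent)}")

# With these payoffs, "speed" is a dominant strategy for both labs even though
# mutual safety investment would leave both better off -- the core claim of the
# racing-dynamics argument. Coordination mechanisms (RSPs, treaties, enforcement)
# amount to changing the payoff matrix rather than appealing to goodwill.
equilibria = [
    (a, b)
    for a, b in product(ACTIONS, repeat=2)
    if best_response(b) == a and best_response(a) == b  # symmetric-game shortcut
]
print("pure-strategy equilibria:", equilibria)
```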
Can meaningful AI coordination be achieved without external enforcement?
Whether voluntary coordination among AI developers can work, or if binding regulation/enforcement is required.
Key Positions
Would Update On
- Track record of RSPs and voluntary commitments
- Regulatory enforcement attempts and outcomes
- Evidence of labs defecting from commitments under pressure
- Successful coordination in analogous domains
Early evidence on voluntary coordination shows mixed results. Anthropic, OpenAI, and other major labs have adopted responsible scaling policies and participated in safety commitments, demonstrating some willingness to coordinate. However, these commitments remain largely aspirational, with limited transparency about implementation and no binding enforcement mechanisms. The recent acceleration in capability announcements and deployment timelines suggests competitive pressures may be overwhelming voluntary restraint.
Industry observers note that successful voluntary coordination often requires repeated interaction, shared norms, and credible monitoring—conditions that may be difficult to maintain in a rapidly evolving field with high stakes. Financial sector coordination during crises provides some positive precedents, but typically involved regulatory backstops and shared crisis recognition. The challenge for AI coordination is achieving cooperation before crises demonstrate the need for restraint.
Can US-China AI coordination succeed despite geopolitical competition?
Whether major AI powers can coordinate on safety/governance despite strategic rivalry.
Key Positions
Would Update On
- US-China AI dialogue outcomes
- Coordination success on specific risks
- Broader geopolitical relationship changes
- Precedents from other technology domains
The current US-China relationship on AI combines strategic competition with limited cooperation on specific issues. While broader technology export controls and investment restrictions reflect deep mistrust, both countries have participated in international AI governance forums and expressed concern about catastrophic risks. The November 2023 Biden-Xi summit produced modest commitments to AI risk dialogue, though follow-through remains limited.
Historical precedents suggest both possibilities and constraints. Nuclear arms control succeeded despite Cold War tensions, demonstrating that existential risks can motivate cooperation even between adversaries. However, those agreements emerged after decades of crisis and near-misses that demonstrated mutual vulnerability. AI cooperation may require similar crisis recognition, which could come too late to prevent harmful racing dynamics.
US-China AI Governance Timeline
| Date | Event | Significance |
|---|---|---|
| Nov 2023 | Biden-Xi Woodside Summit | First agreement to discuss AI governance risks |
| Mar 2024 | UN resolution on safe AI (US-led) | China supported the US-led resolution; support from 193 member states |
| May 2024 | Geneva bilateral meeting | First US-China meeting specifically on AI governance |
| Jun 2024 | UN resolution on AI capacity-building (China-led) | US supported China-led resolution; 120+ members |
| Nov 2024 | Biden-Xi APEC meeting | Agreement to avoid AI control of nuclear weapons |
| Feb 2025 | Paris AI Action Summit | Called for harmonized global standards; showed framework gaps |
| Jul 2025 | China’s Global AI Governance Action Plan | China proposes international AI cooperation organization |
Despite these diplomatic milestones, fundamental tensions persist. The US ties AI exports to political alignment through chip export controls, while China promotes “open cooperation with fewer conditions.” Former Google CEO Eric Schmidt has called for explicit US-China collaboration, stating both nations have “a vested interest to keep the world stable” and ensure “human control of these tools.”
Power and Lock-in Cruxes
Will AI development produce winner-take-all dynamics?
Whether AI advantages compound to produce extreme concentration, or if competition will persist.
Key Positions
Would Update On
- Frontier AI market structure evolution
- Open source capability vs closed labs over time
- Evidence on returns to scale in AI
- Regulatory intervention effects
Current evidence shows significant concentration in frontier AI capabilities among a small number of well-resourced companies, driven by advantages in computing resources, data access, and talent acquisition. The enormous costs of training state-of-the-art models—potentially reaching hundreds of millions or billions of dollars—create substantial barriers to entry. Network effects and data advantages may further compound these inequalities, as successful AI systems generate user data that improves performance.
However, the trajectory toward winner-take-all outcomes remains uncertain. Open-source AI development has produced capable models like Llama and others that approach frontier performance at lower costs. Regulatory intervention could limit concentration through antitrust enforcement or mandatory sharing requirements. Historical precedent suggests that even technologies with strong network effects often settle into competitive oligopolies rather than pure monopolies.
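To make the compounding-advantage argument concrete, the toy simulation below gives two firms capability growth that depends partly on current market share, as a stand-in for data and network feedback. Every parameter is invented for illustration; the only point is that the same small initial edge can yield anything from a stable duopoly to near-monopoly depending on how strong the feedback term is assumed to be.

```python
# Toy model of compounding advantage: two firms grow capability at a baseline
# rate plus a share-dependent data/network feedback term. All parameters are
# invented for illustration and are not calibrated to the market figures
# cited above.

def simulate(feedback: float, steps: int = 50) -> tuple[float, float]:
    """Return the two firms' final market shares after `steps` periods."""
    cap_a, cap_b = 1.1, 1.0            # firm A starts with a 10% capability edge
    for _ in range(steps):
        share_a = cap_a / (cap_a + cap_b)
        share_b = 1.0 - share_a
        # Capability growth = baseline progress + share-dependent feedback.
        cap_a *= 1.05 + feedback * share_a
        cap_b *= 1.05 + feedback * share_b
    share_a = cap_a / (cap_a + cap_b)
    return share_a, 1.0 - share_a


for feedback in (0.0, 0.1, 0.5):
    share_a, share_b = simulate(feedback)
    print(f"feedback={feedback:.1f}: final shares A={share_a:.2f}, B={share_b:.2f}")
```

With zero feedback the initial edge never compounds, while a strong feedback term drives the same edge toward winner-take-all, which is why the empirical questions above (returns to scale, open-source capability, data advantages) matter so much for this crux.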
Would AI-enabled lock-in be reversible?
Whether structures/values locked in via AI could later be changed, or if lock-in would be permanent.
Key Positions
Would Update On
- Historical analysis of technological lock-in
- Analysis of AI's effect on change difficulty
- Evidence on value evolution in stable systems
- Theoretical analysis of lock-in mechanisms
The permanence of potential AI-enabled lock-in depends on several factors that remain highly uncertain. Advanced AI systems could theoretically enable unprecedented surveillance and control capabilities, making coordination for change extremely difficult. If AI development concentrated among a small number of actors, they might gain sufficient leverage to preserve favorable arrangements indefinitely. The speed and scale of AI deployment could create path dependencies that become increasingly difficult to reverse.
However, historical analysis suggests that even seemingly permanent institutional arrangements eventually face challenges from technological change, generational shifts, or external pressures. The Soviet system appeared locked-in for decades before rapid collapse. Economic and technological evolution continues to create new possibilities for social organization. The question may be not whether AI-enabled lock-in would be reversible, but whether it would persist long enough to significantly constrain human development.
Research on Value Lock-in Mechanisms
Recent research has identified specific mechanisms through which AI could enable value lock-in:
| Mechanism | Description | Concern Level |
|---|---|---|
| Technical Architecture | AI systems can maintain unchangeable values through design | High |
| Deceptive Alignment | 2024 research showed Claude 3 Opus sometimes strategically answered prompts to avoid retraining | High |
| Alignment Faking | AI systems may create false impressions of alignment to avoid modification | Medium-High |
| Institutional Entrenchment | AI-enabled surveillance and control capabilities could make coordination for change extremely difficult | Medium |
| Economic Path Dependency | Winner-take-all dynamics may entrench early value choices | Medium |
The Forethought Foundation’s analysis notes that AGI could make it “technologically feasible to perfectly preserve nuanced specifications of a wide variety of values or goals far into the future”—potentially for “millions, and plausibly trillions, of years.” The World Economic Forum’s 2024 white paper on AI Value Alignment explores how to guide AI systems toward shared human values while preserving adaptability.
Is there a risk of premature values crystallization?
Whether AI could lock in current values before humanity has developed sufficient moral wisdom.
Key Positions
Would Update On
- Analysis of how AI might crystallize values
- Historical study of value evolution mechanisms
- Research on moral progress drivers
Concerns about premature values crystallization reflect the observation that AI systems necessarily embed particular values and assumptions in their design and training. If these systems become sufficiently powerful and widespread, they might entrench current moral frameworks before humanity has time to develop greater moral wisdom through experience and reflection. Historical examples of moral progress—such as expanding circles of moral consideration or evolving concepts of justice—suggest that continued value evolution is important for human flourishing.
Critics argue that values crystallization concerns may be overblown, pointing to the continued evolution of values even in stable societies with established institutions. They note that AI systems can be updated and retrained as values evolve, and that competitive pressures may favor systems aligned with evolving social preferences. The challenge lies in distinguishing between values that should be preserved and those that should remain open to evolution.
Human Agency Cruxes
Will AI assistance cause human agency/capability atrophy?
Whether humans will lose critical skills and decision-making capacity through AI dependency.
Key Positions
Would Update On
- Longitudinal studies on AI use and skill retention
- Evidence from domains with long AI assistance history
- Successful skill preservation programs
- Analysis of what skills are actually needed
Evidence from aviation automation provides concerning precedents for skill atrophy concerns. Pilots who rely heavily on autopilot systems show measurable deterioration in manual flying skills, contributing to accidents when automation fails and human intervention is required. Similar patterns appear in navigation (GPS dependency), calculation (calculator reliance), and memory (smartphone externalization). The concern is that widespread AI assistance could create systemic vulnerability if humans lose capacity for independent judgment and action.
However, automation also demonstrates that humans can maintain critical skills through deliberate practice and appropriate system design. Airlines mandate manual flying requirements and emergency procedures training. Medical professionals maintain diagnostic skills despite decision support systems. The key question is whether society will proactively identify and preserve essential human capabilities, or allow market pressures to optimize for short-term efficiency at the expense of long-term resilience.
Quantitative Evidence on AI-Induced Skill Atrophy
| Finding | Source | Implication |
|---|---|---|
| 39% of existing skills will be transformed or outdated by 2030 | World Economic Forum | Massive reskilling need |
| 55,000 US job cuts directly attributed to AI in 2025 | Industry reports | Entry-level positions most affected |
| >60% of workforce needs reskilling | WEF 2025 | Institutional adaptation required |
| Hiring slowed for entry-level programmers and analysts | McKinsey | AI performing tasks once used for training |
A 2024 paper, “The Paradox of Augmentation: A Theoretical Model of AI-Induced Skill Atrophy,” directly addresses the concern that skills erode as humans rely on AI augmentation. Research published in New Biotechnology (2025) by Holzinger et al. examines challenges of human oversight in complex AI systems, noting that “as AI systems grow increasingly complex, opaque, and autonomous, ensuring responsible use becomes a formidable challenge.”
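To illustrate the atrophy mechanism in the abstract, the sketch below implements a generic exponential skill-decay model with periodic deliberate practice. The parameters are invented, and this is not the formal model from the Ganuthula paper or any study cited here.

```python
# Toy skill-retention model: a skill decays toward a floor while tasks are
# delegated to AI and partially recovers with deliberate practice sessions.
# Decay and recovery rates are invented for illustration only.

def skill_trajectory(practice_every: int, months: int = 60,
                     decay: float = 0.04, recovery: float = 0.30,
                     floor: float = 0.2) -> float:
    """Return skill level (0-1) after `months`, practicing every `practice_every` months (0 = never)."""
    skill = 1.0
    for month in range(1, months + 1):
        skill = floor + (skill - floor) * (1 - decay)   # disuse decay toward the floor
        if practice_every and month % practice_every == 0:
            skill += recovery * (1.0 - skill)           # deliberate practice session
    return skill


for interval in (0, 12, 3, 1):
    label = "never" if interval == 0 else f"every {interval} mo"
    print(f"practice {label:>11}: skill after 5 years = {skill_trajectory(interval):.2f}")
```

The shape of the result mirrors the aviation precedent above: without mandated practice the skill settles near its floor, while even infrequent deliberate practice keeps it substantially higher.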
Can meaningful human oversight of advanced AI be maintained?
Whether humans can maintain genuine oversight as AI systems become more capable and complex.
Key Positions
Would Update On
- Progress in interpretability research
- Evidence on human ability to oversee complex systems
- Development of oversight tools and their effectiveness
- Empirical studies on oversight quality as systems scale
Current human oversight of AI systems often resembles “security theater”—superficial review procedures that provide reassurance without meaningful control. Large language models operate as black boxes even to their creators, making genuine oversight extremely challenging. As systems become more capable and operate faster than human cognition, maintaining meaningful human involvement becomes increasingly difficult.
Research in interpretability and AI evaluation offers some hope for maintaining oversight through better tools and methodologies. Techniques like mechanistic interpretability, constitutional AI, and automated evaluation could potentially scale human oversight capabilities. However, this requires significant investment and may lag behind capability development. The fundamental challenge is that truly advanced AI systems may operate in ways that exceed human comprehension, making oversight qualitatively different from previous technologies.
Systemic Dynamics Cruxes
Can social/institutional adaptation keep pace with AI change?
Whether human institutions can adapt quickly enough to manage AI-driven changes.
Key Positions
Would Update On
- Speed of regulatory adaptation vs AI development
- Historical comparison to other fast-changing technologies
- Evidence on institutional flexibility
- Success of adaptive governance experiments
The current pace of AI development clearly outpaces institutional adaptation. Regulatory frameworks lag years behind technological capabilities, with agencies struggling to understand systems that evolve monthly. Traditional policy-making processes involving extensive consultation, analysis, and legislative approval are poorly suited to rapidly changing technologies. The result is a governance gap where powerful AI systems operate with minimal oversight or accountability.
However, institutions have demonstrated adaptability to other technological disruptions. Financial regulators responded to digital trading, privacy laws evolved to address internet technologies, and safety standards adapted to new transportation methods. The question is whether AI’s pace and breadth of impact exceeds institutional adaptation capacity, or whether new governance approaches can bridge the gap. Experiments in adaptive regulation, regulatory sandboxes, and anticipatory governance offer potential models but remain largely untested at scale.
Institutional Adaptation Approaches
Several contrasting approaches have emerged for AI governance institutions:
| Approach | Example | Advantages | Challenges |
|---|---|---|---|
| Adapt existing bodies | China’s Cyberspace Administration | Existing authority and expertise | May lack AI-specific knowledge |
| Create specialized institutions | Spain’s AESIA, UK AI Safety Institute | Focused expertise | Limited authority, resources |
| Regulatory sandboxes | UK FCA fintech sandbox | Enables experimentation | Difficult to scale |
| Anticipatory governance | Singapore Model AI Governance Framework | Proactive; flexible | Requires technical foresight |
Key 2024-2025 developments include:
- May 2024: Council of Europe adopted the first international AI treaty on human rights and democracy
- 2024: UN established High-Level Advisory Body on AI
- 2024: Seoul Summit produced voluntary Frontier AI Safety Commitments from 16 major AI companies
- 2024: Federal AI Risk Management Act mandated NIST AI Risk Management Framework for US agencies
Do AI interaction speeds create fundamentally new risks?
Whether AI systems interacting faster than human reaction time creates qualitatively new dangers.
Key Positions
Would Update On
- Analysis of flash crash dynamics
- Evidence from high-speed AI system interactions
- Research on human oversight of fast systems
- Incidents involving AI speed
Financial markets provide clear examples of how AI speed can create systemic risks. Flash crashes driven by algorithmic trading have caused market disruptions within milliseconds, too fast for human intervention. These events demonstrate how AI systems interacting at superhuman speeds can create cascading failures that exceed traditional risk management capabilities.
As AI systems become more prevalent across critical infrastructure, similar dynamics could emerge in power grids, transportation networks, or communication systems. The concern is not just individual system failures, but emergent behaviors from AI systems interacting faster than human operators can monitor or control. However, the same speed that creates risks also enables rapid response systems and fail-safes that could mitigate dangers more effectively than human-speed systems.
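As a minimal illustration of the speed argument, the toy simulation below lets automated agents amplify each price move every tick while a human overseer can only check and halt trading every few ticks. All parameters are invented; this is a sketch of the latency mismatch, not a model of any real market or incident.

```python
# Toy cascade model: automated agents react and amplify every tick, while a
# human overseer can only intervene every `human_latency` ticks. Parameters
# are invented for illustration and do not model any real market.

def simulate_crash(human_latency: int, ticks: int = 60,
                   shock: float = -0.02, feedback: float = 1.3,
                   max_drop: float = -0.15) -> float:
    """Return the worst drawdown before a human-triggered halt takes effect."""
    price = peak = 100.0
    move = shock                  # initial exogenous shock
    halted = False
    for t in range(1, ticks + 1):
        if not halted:
            price *= (1 + move)
            move = max(move * feedback, max_drop)   # algos amplify the last move, capped per tick
        if t % human_latency == 0 and price < 0.95 * peak:
            halted = True         # the human overseer notices the drop and halts trading
    return (peak - price) / peak


for latency in (1, 5, 20):
    print(f"human checks every {latency:>2} ticks: max drawdown = {simulate_crash(latency):.1%}")
```

The same structure also illustrates the mitigation point: automated circuit breakers are equivalent to shrinking `human_latency` toward a single tick, which is why fast fail-safes can offset some of the risk that fast interactions create.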
Safety Implications and Trajectory
The structural risks landscape presents both concerning and promising developments. On the concerning side, current trends show accelerating AI capabilities development with limited coordination between major players, increasing concentration of power among a few well-resourced organizations, and institutional adaptation lagging significantly behind technological change. The competitive dynamics between the US and China have intensified rather than leading to cooperation, while voluntary coordination mechanisms remain largely untested under serious pressure.
However, promising developments include growing awareness of structural risks among policymakers and researchers, early experiments in governance frameworks like responsible scaling policies, and increasing investment in AI safety research including interpretability and alignment work. Some international dialogue on AI governance continues despite broader geopolitical tensions, and civil society organizations are mobilizing around AI accountability and democratic governance issues.
Looking ahead 1-2 years, we expect continued rapid capability development with periodic attempts at voluntary coordination among leading labs. Regulatory frameworks will likely emerge in major jurisdictions but may struggle to keep pace with technological advancement. International coordination will probably remain limited to narrow technical cooperation rather than comprehensive governance regimes. The critical question is whether early warning signs of structural risks will motivate more serious coordination efforts or be dismissed because responding to them would carry a competitive disadvantage.
In the 2-5 year timeframe, the resolution of several key cruxes may become clearer. We will have better evidence on whether voluntary industry coordination can survive competitive pressures, whether human oversight can scale with AI capabilities, and whether institutions can develop adaptive governance mechanisms. The trajectory of US-China relations and broader geopolitical stability will significantly influence the possibility for international cooperation. Most importantly, we may see the first examples of AI systems with capabilities that clearly exceed human oversight capacity, forcing concrete decisions about acceptable risk levels and governance approaches.
Key Uncertainties
Despite extensive analysis, fundamental uncertainties remain about structural risks from AI. We lack clear empirical metrics for measuring power concentration or institutional adaptation speed, making it difficult to distinguish normal technological disruption from qualitatively new structural changes. The interaction effects between technical AI capabilities and social dynamics are poorly understood, with most analysis based on speculation rather than rigorous empirical study.
The timeline for critical decisions remains highly uncertain. Some structural changes may happen gradually over decades, allowing time for institutional adaptation, while others could occur rapidly during periods of capability growth or geopolitical crisis. We also have limited understanding of which interventions would be most effective, with ongoing debates about whether technical solutions, governance frameworks, or democratic accountability measures should take priority.
Perhaps most fundamentally, the very definition and boundaries of structural risks remain contested. This conceptual uncertainty makes it difficult to design targeted interventions or evaluate progress. Resolution of these foundational questions will likely require both theoretical development and empirical evidence from AI deployment at scale—evidence that may come too late to prevent potentially harmful structural changes.
Position Implications
| If you believe… | Prioritize… |
|---|---|
| Structural risks are genuinely distinct | Governance and coordination research |
| AI concentrates power qualitatively more | Antitrust, redistribution, democratic governance |
| Racing is inevitable | Making racing safer; technical solutions |
| Coordination can succeed | Investment in diplomatic channels; voluntary commitments |
| International coordination is unlikely | Domestic governance; defensive measures |
| Winner-take-all dynamics likely | Urgent antitrust; open-source support |
| Lock-in would be permanent | Prevention over adaptation; current values matter |
| Human oversight is feasible | Interpretability and evaluation research |
| Adaptation will lag dangerously | Slow AI development; build adaptive institutions |
Sources and Further Reading
Academic Research
- Korinek & Vipra (2025): Concentrating Intelligence: Scaling and Market Structure in AI - Economic analysis of AI market concentration
- Gans (2024): Market Power in Artificial Intelligence - NBER analysis of competition drivers
- Ganuthula (2024): The Paradox of Augmentation - SSRN theoretical model of AI-induced skill atrophy
- Holzinger et al. (2025): Is human oversight to AI systems still possible? - New Biotechnology analysis of oversight challenges
- AI Governance in a Complex Regulatory Landscape - Humanities and Social Sciences Communications global perspective
Policy Reports
- Congressional Research Service: Competition and Antitrust Concerns Related to Generative AI - 2024 analysis of US competition issues
- AI Now Institute: Artificial Power - Concentration and power in AI
- Open Markets Institute: AI and Market Concentration - Expert brief on concentration concerns
- Carnegie Endowment: The AI Governance Arms Race - Analysis of governance coordination
International Governance
- Sandia National Labs: US-China AI Collaboration Challenges - 2025 analysis of cooperation barriers
- TechPolicy.Press: From Competition to Cooperation - US-China engagement analysis
- China’s Global AI Governance Action Plan - Ministry of Foreign Affairs (July 2025)
Value Lock-in and Long-term Risks
- Forethought Foundation: AGI and Lock-in - Analysis of permanent value lock-in
- World Economic Forum: AI Value Alignment - 2024 white paper on alignment with human values
- The Precipice (Ord, 2020) - Framework for existential risk including lock-in
- What We Owe the Future (MacAskill, 2022) - Longtermist perspective on value evolution
Racing Dynamics
- AI Safety Textbook: AI Race - Comprehensive analysis of competitive dynamics
- TNSR: Debunking the AI Arms Race Theory - Skeptical perspective on arms race framing
- Armstrong, Bostrom & Shulman: Racing to the Precipice - Original model of AI development races
Institutional Adaptation
- World Economic Forum: Governance in the Age of Generative AI - 2024 governance framework
- Stanford FSI: Regulating Under Uncertainty - Governance options analysis
- WEF: GenAI is rapidly evolving - How governments can keep pace