International Coordination Mechanisms
- The International Network of AI Safety Institutes has a combined budget on the order of $150-250 million annually across 11 countries, dwarfed by private-sector AI spending of over $100 billion annually, raising fundamental questions about its practical influence on AI development.
- Information sharing on AI safety research has high feasibility for international cooperation, while capability restrictions have very low feasibility, creating a stark hierarchy in which technical cooperation is viable but governance of development remains nearly impossible.
- The UK rebranded its AI Safety Institute as the "AI Security Institute" in February 2025, pivoting from existential safety concerns to near-term security threats such as cyber-attacks and fraud, signaling potential fragmentation in international AI safety approaches.
International coordination represents one of the most challenging yet potentially crucial approaches to AI safety, involving the development of global cooperation mechanisms to ensure advanced AI systems are developed and deployed safely across all major AI powers. As AI capabilities advance rapidly across multiple nations—particularly the United States, China, and the United Kingdom—the absence of coordinated safety measures could lead to dangerous race dynamics where competitive pressures override safety considerations.
The fundamental challenge stems from the global nature of AI development combined with the potentially catastrophic consequences of misaligned advanced AI systems. Unlike previous technological risks that could be contained nationally, advanced AI capabilities and their risks are inherently global, requiring unprecedented levels of international cooperation in an era of heightened geopolitical tensions. The stakes are particularly high given that uncoordinated AI development could lead to a “race to the bottom” where safety precautions are sacrificed for competitive advantage.
Current efforts at international coordination show both promise and significant limitations. The AI Safety Summit series, beginning with the UK’s Bletchley Park summit in November 2023, has brought together major AI powers but has largely remained at the level of symbolic commitments rather than substantive agreements. The Council of Europe’s Framework Convention on AI, adopted in May 2024, represents the first legally binding international AI treaty. The emerging International Network of AI Safety Institutes represents a more technical approach to coordination, though its effectiveness remains to be demonstrated. Meanwhile, bilateral dialogues between the US and China on AI safety have begun but operate within the broader context of strategic competition that limits trust and information sharing.
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Tractability | Low-Medium (25-40% success probability) | Geopolitical tensions between US and China limit substantive cooperation; Council of Europe treaty has 17 signatories but weak enforcement |
| Impact if Successful | Very High (potential 40-60% reduction in racing risk) | Could prevent racing dynamics, establish global safety standards, enable coordinated response to AI incidents |
| Current Progress | Limited (15-25% of needed infrastructure) | Three major summits held (2023-2025); 11-country AI Safety Institute network formed; first binding treaty signed |
| Key Barriers | Geopolitical competition | US-China strategic rivalry; AI framed as national security issue in both countries; US/UK declined Paris declaration |
| Verification Challenges | High (less than 10% of nuclear-style verification feasible) | AI capabilities harder to monitor than nuclear/chemical weapons; no equivalent to IAEA inspections |
| Time Horizon | 5-15 years | Building international institutions comparable to nuclear governance took 25 years; UN Global Dialogue launched 2025 |
| Resource Requirements | High ($200-250M annually) | AI Safety Institutes: UK ≈$65M, US $47.7M requested, Canada $36M; treaty secretariats require additional funding |
| Global Participation | Growing (61 countries at Paris 2025) | Paris Summit drew 61 signatories including China, India, EU; up from 29 at Bletchley |
Comparative National Approaches to AI Governance
The three major AI powers—the United States, European Union, and China—have adopted fundamentally different regulatory philosophies that reflect their distinct political systems, economic priorities, and cultural values. These divergent approaches create both challenges and opportunities for international coordination. Understanding these differences is essential for assessing the feasibility of various coordination mechanisms.
Regulatory Philosophy Comparison
| Dimension | European Union | United States | China |
|---|---|---|---|
| Regulatory Model | Comprehensive, risk-based framework | Decentralized, sector-specific | Centralized, state-led directives |
| Primary Legislation | EU AI Act (August 2024) | No unified federal law; NIST RMF, state laws, executive orders | Algorithmic Recommendation Rules (2022), Generative AI Measures (2023) |
| Risk Classification | Four tiers: unacceptable, high, limited, minimal | Varies by agency and sector | Aligned with national security and social stability priorities |
| Enforcement Body | European AI Office | Multiple agencies (FDA, FTC, NHTSA, etc.) | Cyberspace Administration of China (CAC) |
| Innovation Stance | Precautionary; ex-ante requirements | Permissive; sector-by-sector | Strategic; strong state support with content controls |
| Data Requirements | GDPR compliance, algorithmic impact assessments | Sector-specific; voluntary for most AI | Data localization; security reviews |
| Transparency | High; documentation and disclosure mandated | Variable; depends on sector | Limited; state oversight prioritized |
| Extraterritorial Reach | Strong (Brussels Effect) | Moderate (export controls) | Limited to domestic market |
Strengths and Weaknesses by Approach
| Approach | Strengths | Weaknesses | Coordination Implications |
|---|---|---|---|
| EU (Comprehensive) | Clear rules; strong rights protection; international influence via Brussels Effect | May slow innovation; compliance costs; complex implementation | Could set global standards; others may resist adoption |
| US (Decentralized) | Flexibility; innovation-friendly; rapid adaptation | Inconsistent coverage; gaps in protection; state fragmentation | Harder to negotiate unified positions; industry-led standards |
| China (State-Led) | Rapid implementation; strategic coherence; strong enforcement capacity | Limited transparency; privacy concerns; political controls | Different governance values complicate alignment |
According to recent analysis, “Each regulatory system reflects distinct cultural, political and economic perspectives. Each also highlights differing regional perspectives on regulatory risk-benefit tradeoffs, with divergent judgments on the balance between safety versus innovation and cooperation versus competition.” The 2025 Government AI Readiness Index notes that the global AI leadership picture is “increasingly bipolar,” with the United States and China emerging as the two dominant forces.
Major International Coordination Mechanisms
Current Framework Landscape
| Mechanism | Type | Participants | Status (Dec 2025) | Binding? |
|---|---|---|---|---|
| Council of Europe AI Treaty | Multilateral treaty | 17 signatories (US, UK, EU, Canada, Japan, Switzerland, others) | Open for signature Sep 2024; ratified by UK, France, Norway | Yes (first binding AI treaty) |
| International Network of AI Safety Institutes | Technical cooperation | 11 countries + EU | Inaugural meeting Nov 2024 | No |
| Bletchley Declaration | Political declaration | 29 countries + EU | Signed Nov 2023 | No |
| Seoul Frontier AI Commitments | Industry pledges | 16 major AI companies | May 2024 | No |
| G7 Hiroshima AI Process | Code of conduct | G7 members | Adopted Oct 2023 | No |
| US-China AI Dialogue | Bilateral | US, China | First meeting May 2024 | No |
| UN AI Advisory Body | Multilateral | UN Member States | Final report Sep 2024 | No |
AI Safety Institute Network
The International Network of AI Safety Institutes, launched in November 2024, represents the most concrete technical cooperation mechanism:
| Institute | Country | Annual Budget | Focus Areas | Status |
|---|---|---|---|---|
| UK AI Security Institute | United Kingdom | ≈$65M (£50M) | Near-term security risks, model evaluations | Rebranded Feb 2025 |
| US CAISI (NIST) | United States | $47.7M (FY2025 request) | Standards, evaluation frameworks | Renamed 2025 |
| EU AI Office | European Union | ≈$8M | AI Act enforcement, standards | Operational since 2024 |
| AISI Japan | Japan | ≈$5M | Evaluations, safety research | Building capacity |
| AISI Korea | Republic of Korea | ≈$5M | Safety evaluations | Building capacity |
| AISI Singapore | Singapore | ≈$3M | Governance, evaluations | Building capacity |
| AISI Canada | Canada | ≈$36M (C$50M) | Safety standards | Announced Apr 2024 |
| AISI Australia | Australia | TBD | Safety research, risk response | Operational early 2026 |
| AISI France | France | ≈$5M | Safety research, EU coordination | Building capacity |
| AISI Kenya | Kenya | ≈$1M | Global South representation | Early stage |
| IndiaAI Safety Institute | India | TBD | Safe AI model application | Announced Jan 2025 |
The network announced $11 million in funding for synthetic content research and completed its first multilateral model testing exercise at the November 2024 San Francisco convening.
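To make the resource asymmetry concrete, here is a minimal sketch that sums the approximate institute budgets listed in the table above (omitting the Australian and Indian institutes, whose budgets are listed as TBD) and compares the total with order-of-magnitude private-sector AI spending. The figures are the rough estimates from the table, not audited numbers.

```python
# Rough comparison of AI Safety Institute network budgets with private-sector AI
# spending. Budget figures (USD millions per year) are the approximate values
# from the table above; entries listed as TBD are omitted.
institute_budgets_musd = {
    "UK AI Security Institute": 65,
    "US CAISI (NIST)": 47.7,
    "EU AI Office": 8,
    "AISI Japan": 5,
    "AISI Korea": 5,
    "AISI Singapore": 3,
    "AISI Canada": 36,
    "AISI France": 5,
    "AISI Kenya": 1,
}

network_total_musd = sum(institute_budgets_musd.values())
private_ai_spend_musd = 100_000  # >$100B/year private-sector AI spending (order of magnitude)

print(f"Listed network budgets: ~${network_total_musd:.0f}M per year")
print(f"Private AI spending is roughly {private_ai_spend_musd / network_total_musd:,.0f}x larger")
```

On the figures cited here, private-sector spending exceeds the network's combined budgets by a factor of several hundred, which is the gap the Key Uncertainties section returns to below.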
2025 Institutional Developments
The landscape of international AI governance institutions underwent significant changes in 2025, reflecting evolving priorities and geopolitical dynamics.
UK AI Safety Institute Rebranding (February 2025): In a significant shift, the UK renamed its AI Safety Institute to the “AI Security Institute” at the Munich Security Conference. Technology Secretary Peter Kyle stated: “This change brings us into line with what most people would expect an Institute like this to be doing.” The rebranded institute now focuses on “serious AI risks with security implications”—including chemical and biological weapons development, cyber-attacks, and crimes such as fraud—rather than broader existential safety concerns. This pivot signals a potential divergence in international approaches, with the UK prioritizing near-term security threats over long-term alignment risks.
OECD G7 Hiroshima Reporting Framework (February 2025): The OECD launched the first global framework for companies to report on implementation of the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems. Major AI developers—including Amazon, Anthropic, Fujitsu, Google, KDDI Corporation, Microsoft, NEC Corporation, NTT, OpenAI, Preferred Networks, Rakuten Group, Salesforce, and Softbank—pledged to complete the inaugural framework by April 15, 2025. First reports were published in June 2025. This represents the first standardized monitoring mechanism for voluntary AI safety commitments, though enforcement remains limited to reputational incentives.
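To illustrate what standardized monitoring of voluntary commitments might capture in practice, the sketch below defines a purely hypothetical report record; the field names are illustrative assumptions and do not reproduce the actual OECD reporting template.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a standardized voluntary-commitment report record.
# Field names are illustrative only and are NOT the OECD template; they show the
# kind of information a reporting framework could make comparable across companies.
@dataclass
class CommitmentReport:
    organisation: str
    reporting_period: str
    risk_assessment_published: bool              # whether a frontier-risk assessment was published
    external_evaluations: list[str] = field(default_factory=list)  # third-party evaluators engaged
    incidents_disclosed: int = 0                 # safety incidents disclosed in the period
    notes: str = ""

example = CommitmentReport(
    organisation="Example AI Lab",               # hypothetical organisation
    reporting_period="2025-H1",
    risk_assessment_published=True,
    external_evaluations=["independent red-team"],
    notes="Voluntary submission; enforcement limited to reputational incentives.",
)
print(example)
```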
UN Global Dialogue on AI Governance (September 2025): Building on the Global Digital Compact adopted in 2024, the UN launched the Global Dialogue on AI Governance—described as “the world’s principal venue for collective focus on this transformative technology.” The initiative complements existing efforts at the OECD, G7, and regional organizations while providing an inclusive forum for developing nations. The UN also established the Independent International Scientific Panel on AI, comprising 40 expert members who will provide evidence-based insights on AI opportunities, risks, and impacts—sometimes likened to an “IPCC for AI.” Annual convenings are scheduled for the 2026 AI for Good Global Summit in Geneva and 2027 in New York.
G7 December 2025 Declaration: Meeting in Montreal, G7 Ministers responsible for industry, digital affairs, and technology adopted a joint declaration reaffirming commitment to risk-based approaches encompassing system transparency, technical robustness, and data quality. The declaration called for increased convergence of regulatory approaches at the international level through OECD work, aiming to limit fragmentation and secure cross-border investments.
| Development | Date | Significance | Limitations |
|---|---|---|---|
| UK AI Security Institute rebrand | Feb 2025 | Signals shift from existential to near-term security focus | May reduce coordination on alignment research |
| OECD Hiroshima Reporting Framework | Feb 2025 | First standardized monitoring; 13+ companies pledged | No enforcement mechanism; voluntary only |
| Switzerland signs CoE Treaty | Mar 2025 | 17th signatory; growing European consensus | US Senate ratification uncertain |
| EU proposes CoE Treaty ratification | Jun 2025 | Formal EU commitment to binding AI governance | Requires Council and Parliament approval |
| UN Global Dialogue launch | Sep 2025 | Inclusive global forum; 40-member Scientific Panel | US opposed multilateral mechanisms; non-binding |
| G7 Montreal Declaration | Dec 2025 | Regulatory convergence commitment | G7-only; excludes China |
| India AISI announcement | Jan 2025 | First major developing economy AISI | Budget and scope TBD |
Critical Cooperation Areas and Feasibility
The feasibility of international coordination varies dramatically across domains. Information sharing on AI safety research represents perhaps the most tractable area for cooperation, as it provides mutual benefits without requiring countries to limit their capabilities development. The establishment of common safety standards and evaluation protocols offers medium feasibility, building on existing precedents in other technology sectors while allowing countries to maintain competitive positions.
Cooperation Feasibility Matrix
| Cooperation Area | Feasibility | Current Status | Key Enablers | Key Barriers |
|---|---|---|---|---|
| Safety research sharing | High | Active via AISI network | Mutual benefit; low competitive cost | Classification concerns; IP protection |
| Evaluation standards | Medium-High | OECD framework launched Feb 2025 | Technical objectivity; industry interest | Different risk priorities; enforcement gaps |
| Incident reporting | Medium | No formal mechanism | Shared interest in avoiding catastrophe | Attribution challenges; competitive sensitivity |
| Crisis communication | Medium | Biden-Xi nuclear AI agreement (Nov 2024) | Nuclear precedent; mutual deterrence | Trust deficit; limited scope |
| Deployment standards | Medium | EU AI Act extraterritorial reach | Brussels Effect; market access | Sovereignty concerns; innovation impact |
| Capability restrictions | Low | US export controls (unilateral) | Security imperatives | Zero-sum framing; verification impossible |
| Development moratoria | Very Low | No serious proposals | Catastrophic risk awareness | First-mover advantages; enforcement |
However, coordination on capability restrictions faces significant challenges due to the dual-use nature of AI research and the perceived strategic importance of AI leadership. Export controls on AI hardware, implemented primarily by the United States since 2022, illustrate both the potential and limitations of unilateral approaches—while they may slow capability development in target countries, they also reduce trust and may accelerate independent development efforts. According to RAND analysis, China’s AI ecosystem remains competitive despite US export controls, and DeepSeek’s founder has stated that “bans on shipments of advanced chips are the problem” rather than funding constraints.
Crisis communication mechanisms represent another medium-feasibility area for cooperation, drawing parallels to nuclear-era hotlines and confidence-building measures. Such mechanisms could prove crucial if advanced AI systems begin exhibiting concerning behaviors or if there are near-miss incidents that require coordinated responses. The November 2024 Biden-Xi agreement that “humans, not AI” should control nuclear weapons represents a modest but significant step in this direction.
International Coordination Landscape
International AI governance forms a multi-layered architecture, ranging from binding treaties and technical cooperation networks to political declarations and voluntary industry commitments.
The US-China Cooperation Dilemma
The central challenge for international AI coordination lies in US-China relations, as these two countries lead global AI development but operate within an increasingly adversarial strategic context. The feasibility of meaningful cooperation faces fundamental tensions between mutual interests in avoiding catastrophic outcomes and zero-sum perceptions of AI competition.
US-China AI Engagement Timeline
| Date | Event | Significance |
|---|---|---|
| Nov 2023 | Xi-Biden APEC meeting | Commitment to establish AI dialogue |
| Nov 2023 | Both sign Bletchley Declaration | First joint safety commitment |
| May 2024 | First intergovernmental AI dialogue (Geneva) | Working-level technical discussions |
| Nov 2024 | Biden-Xi nuclear AI agreement | Agreement that humans control nuclear weapons |
| Jul 2025 | China publishes Global AI Governance Action Plan | Signals continued engagement interest |
Arguments for possible cooperation point to several factors: both countries have expressed concern about AI risks and have established government entities focused on AI safety; there are precedents for technical cooperation even during periods of broader competition, such as in climate research; and Chinese officials have engaged substantively in international AI safety discussions, suggesting genuine concern about risks rather than purely strategic positioning.
However, significant obstacles remain. The framing of AI as central to national security and economic competitiveness in both countries creates strong incentives against sharing information or coordinating on limitations. The broader deterioration in US-China relations since 2018 has created institutional barriers to cooperation, while mutual suspicions about intentions make verification and trust-building extremely difficult.
According to RAND researchers, “scoping an AI dialogue is difficult because ‘AI’ does not mean anything specific in many U.S.-China engagements. It means everything from self-driving cars and autonomous weapons to facial recognition, face-swapping apps, ChatGPT, and a potential robot apocalypse.”
The Biden administration’s approach combined competitive measures (export controls, investment restrictions) with selective engagement on shared challenges, but progress remained limited. Chinese participation in international AI safety discussions has increased, but substantive commitments remain vague, and there are questions about whether engagement reflects genuine safety concerns or strategic positioning.
Lessons from Nuclear Governance
Historical comparisons to nuclear arms control offer both relevant precedents and important cautionary notes. According to RAND analysis on nuclear history and AI governance, the development of nuclear non-proliferation took approximately 25 years from the first atomic weapons to the NPT entering into force in 1970.
Transferable Lessons vs. Key Differences
| Dimension | Nuclear Governance | AI Governance | Implication |
|---|---|---|---|
| Verification | Physical inspections (IAEA) | No equivalent for AI capabilities | Harder to monitor compliance |
| Containment | Rare materials, specialized facilities | Widely distributed, software-based | Export controls less effective |
| State control | Governments control most capabilities | Private companies lead development | Different negotiating parties needed |
| Demonstrable harm | Hiroshima/Nagasaki demonstrated risks | AI harms remain speculative | Less urgency for cooperation |
| Timeline to develop | Years, billions of dollars | Months, millions of dollars | Faster proliferation |
| Dual-use nature | Clear weapons vs. energy distinction | Almost all AI research is dual-use | Harder to define restrictions |
According to the Finnish Institute of International Affairs, “compelling arguments have been made to state why nuclear governance models won’t work for AI: AI lacks state control, has no reliable verification tools, and is inherently harder to contain.”
However, some lessons remain transferable. The GovAI research paper on the Baruch Plan notes that early cooperation attempts failed but built foundations for later success. Norm-building and stigmatization of dangerous practices can work even without enforcement, and crisis communication mechanisms (like nuclear hotlines) prove valuable during tensions.
Safety Implications and Risk Considerations
International coordination presents both promising and concerning implications for AI safety. On the positive side, coordinated approaches could prevent dangerous race dynamics that might otherwise pressure developers to cut safety corners in pursuit of competitive advantage. Shared safety research could accelerate the development of alignment techniques and safety evaluation methods, while coordinated deployment standards could ensure that safety considerations are maintained globally rather than just in safety-conscious jurisdictions.
However, coordination efforts also carry risks that must be carefully managed. Information sharing on AI capabilities could inadvertently accelerate dangerous capabilities development in countries with weaker safety practices. Coordination mechanisms might legitimize or strengthen authoritarian uses of AI by creating channels for technology transfer. There are also risks that coordination efforts could create false confidence or serve as cover for continued dangerous development practices.
The timing of coordination efforts matters significantly. Early coordination on safety research and standards may be more feasible and beneficial than attempts at capability restrictions, which become more difficult as strategic stakes increase. However, waiting too long to establish coordination mechanisms may mean they are unavailable when needed most urgently.
Current Trajectory and Near-Term Prospects
AI Summit Series Evolution
The international AI summit series has grown in scope but faces questions about substantive impact:
| Summit | Date | Signatories | Key Outcomes | Criticism |
|---|---|---|---|---|
| Bletchley (UK) | Nov 2023 | 29 countries + EU | Bletchley Declaration; AI Safety Institutes commitment | Symbolic only; no enforcement |
| Seoul (Korea) | May 2024 | 27 countries + EU | Frontier AI Safety Commitments (16 companies) | Industry self-regulation |
| Paris (France) | Feb 2025 | 61 countries | $100M Current AI endowment; environmental coalition | US and UK declined to sign joint declaration |
| New Delhi (India) | Feb 2026 | TBD | AI Impact Summit—first Global South host | Pending |
The Paris AI Action Summit highlighted emerging tensions. While 58 countries signed a joint declaration on “Inclusive and Sustainable AI,” the US and UK refused to sign, citing lack of “practical clarity” on global governance. According to the Financial Times, the summit “highlighted a shift in the dynamics towards geopolitical competition” characterized as “a new AI arms race” between the US and China.
Anthropic CEO Dario Amodei reportedly called the Paris Summit a “missed opportunity” for addressing AI risks, with similar concerns voiced by David Leslie of the Alan Turing Institute and Max Tegmark of the Future of Life Institute.
Near-Term Outlook (2025-2027)
The trajectory of international AI coordination appears to be following a pattern of incremental institutionalization amid persistent geopolitical constraints. Several trends from 2025 are likely to continue:
Observed 2025 developments shaping future trajectory:
- UK pivot from “safety” to “security” framing may influence other national institutes
- OECD reporting framework provides template for monitoring voluntary commitments
- UN Global Dialogue and Scientific Panel creating inclusive multilateral venues
- Singapore-Japan joint testing report demonstrates practical AISI network cooperation
Most likely developments (2026-2027):
- AI Safety Institute network expansion (India hosting February 2026 AI Impact Summit—first Global South host)
- Trump-Xi exchange of visits planned for 2026 could include AI biosecurity cooperation per 2025 UN address
- EU AI Act enforcement creating de facto international standards via Brussels Effect
- UN Global Dialogue convenings in Geneva (2026) and New York (2027) with Scientific Panel reports
- Possible convergence of US CAISI and UK Security Institute on near-term threats
Key uncertainties:
- Impact of US political changes on export controls and international engagement
- Whether China will deepen or reduce participation in Western-led initiatives
- Whether a major AI incident could create momentum for stronger coordination
- Trajectory of UK security-focused approach vs broader safety concerns
The European Union’s AI Act enforcement, which began in phases from August 2024, may create additional coordination opportunities through regulatory alignment, as companies seeking EU market access adopt its requirements globally. According to CSET’s analysis, understanding the underlying assumptions of different governance proposals is essential for navigating the increasingly complex international landscape.
Key Uncertainties and Research Questions
Several critical uncertainties shape the prospects for international AI coordination:
| Uncertainty | Current Assessment | Impact on Coordination |
|---|---|---|
| Is US-China cooperation possible? | Low probability of deep cooperation; working-level dialogue possible | Central to global coordination success |
| Can AI Safety Institutes influence development? | Unproven; budgets small relative to industry | Determines value of technical cooperation |
| Are verification mechanisms feasible? | Harder than nuclear/chemical; no good analogies | Limits enforceable agreements |
| Will AI incidents create cooperation windows? | Unknown; depends on incident severity/attribution | Could shift political feasibility rapidly |
| Will private sector or governments lead? | Currently mixed; companies have more technical capacity | Affects negotiating structures needed |
The effectiveness of technical cooperation through AI Safety Institutes is still being tested, with key questions about whether such cooperation can influence actual AI development practices or remains largely academic. The combined budget of the AI Safety Institute network (approximately $200-250 million annually following 2025 expansions) is dwarfed by private sector AI spending (over $100 billion annually), raising questions about their practical influence.
Questions about verification and compliance with international AI agreements remain largely theoretical but will become critical if more substantive agreements are attempted. According to research on AI treaty verification, “substantial preparations are needed: (1) developing privacy-preserving, secure, and acceptably priced methods for verifying the compliance of hardware, given inspection access; and (2) building an initial, incomplete verification system, with authorities and precedents that allow its gaps to be quickly closed if and when the political will arises.”
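As a deliberately simplified illustration of the verification gap, the sketch below estimates a training run’s compute using the common rule of thumb of roughly 6 FLOP per parameter per training token and checks it against declared reporting thresholds. The threshold values (10^25 FLOP, mirroring the EU AI Act’s systemic-risk presumption, and 10^26 FLOP, echoing the 2023 US executive order) and the hypothetical model scale are assumptions for illustration; the hard part in practice is that an outside verifier cannot observe parameters, tokens, or chip-hours without the kind of trusted hardware access the cited research calls for.

```python
# Simplified sketch: estimating a training run's compute and checking it against
# reporting thresholds. The 6 * parameters * tokens approximation, the threshold
# values, and the model scale are assumptions for illustration; real verification
# would require trusted access to hardware or logs that does not yet exist.
def training_flop(parameters: float, tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOP per parameter per training token."""
    return 6 * parameters * tokens

EU_SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # EU AI Act systemic-risk presumption (assumed here)
US_EO_REPORTING_THRESHOLD_FLOP = 1e26   # 2023 US executive order reporting trigger (assumed here)

# Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
estimate = training_flop(parameters=1e12, tokens=2e13)

print(f"Estimated training compute: {estimate:.1e} FLOP")
print(f"Exceeds EU threshold:  {estimate > EU_SYSTEMIC_RISK_THRESHOLD_FLOP}")
print(f"Exceeds US EO threshold: {estimate > US_EO_REPORTING_THRESHOLD_FLOP}")
```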
The broader question of whether international coordination is necessary for AI safety depends partly on unresolved technical questions about AI alignment and control. If alignment problems prove tractable through purely technical means, the importance of international coordination may diminish. However, if alignment remains difficult or if powerful AI systems create new forms of risk, international coordination may prove essential regardless of its current political feasibility.
Sources and Further Reading
Official Documents and Declarations
Section titled “Official Documents and Declarations”- The Bletchley Declaration↗🏛️ government★★★★☆UK Governmentgovernment AI policiesSource ↗Notes - UK Government (November 2023)
- Seoul Declaration for Safe, Innovative and Inclusive AI↗🏛️ government★★★★☆UK GovernmentSeoul Declaration for Safe, Innovative and Inclusive AISource ↗Notes - AI Seoul Summit (May 2024)
- Frontier AI Safety Commitments↗🏛️ government★★★★☆UK GovernmentSeoul Frontier AI CommitmentsSource ↗Notes - AI Seoul Summit (May 2024)
- Council of Europe Framework Convention on AI↗🔗 webCouncil of Europe: AI Treaty PortalSource ↗Notes - Council of Europe (May 2024)
- International Network of AI Safety Institutes Fact Sheet↗🏛️ government★★★★★NISTInternational Network of AI Safety InstitutesSource ↗Notes - US Commerce Department (November 2024)
Analysis and Research
- A Roadmap for a US-China AI Dialogue - Brookings Institution
- Potential for U.S.-China Cooperation on Reducing AI Risks - RAND Corporation
- Insights from Nuclear History for AI Governance - RAND Corporation
- The AI Safety Institute International Network: Next Steps - CSIS
- International Control of Powerful Technology: Lessons from the Baruch Plan - GovAI
- Nuclear arms control policies and safety in AI - Finnish Institute of International Affairs
- U.S. Export Controls and China: Advanced Semiconductors - Congressional Research Service
- AI Governance at the Frontier - CSET (November 2025)
- GovAI Research on International Governance - Centre for the Governance of AI
- Comparative Global AI Regulation - Policy perspectives from the EU, China, and the US
- 2025 Government AI Readiness Index - Oxford Insights
- Promising Topics for US-China Dialogues on AI Risks - ACM FAccT 2025
- How China and the US Can Make AI Safer for Everyone - The Diplomat (January 2026)
- Eight Ways AI Will Shape Geopolitics in 2026 - Atlantic Council
- Strengthening International Cooperation on AI - Brookings Institution
- The Annual AI Governance Report 2025 - ITU
Summit Coverage and News
- Paris AI Action Summit Official Site - French Government
- Key Outcomes of the AI Seoul Summit - techUK
- Did the Paris AI Action Summit Deliver? - The Future Society
- China and the United States Begin Official AI Dialogue - China US Focus
- Paris AI Summit: Why Won’t US, UK Sign Global AI Pact? - Al Jazeera
- UN Secretary-General Launches Global Dialogue on AI Governance - UN Press Release
- The UN’s New AI Governance Bodies Explained - World Economic Forum
- OECD Launches Hiroshima AI Process Reporting Framework - OECD
- How the G7’s New AI Reporting Framework Could Shape AI Governance - OECD.AI
- Global Landscape of AI Safety Institutes - All Tech Is Human
AI Transition Model Context
International coordination mechanisms improve the AI Transition Model through Civilizational Competence (society’s aggregate capacity to navigate the AI transition well, including governance effectiveness, epistemic health, coordination capacity, and adaptive resilience):
| Factor | Parameter | Impact |
|---|---|---|
| Civilizational Competence | International Coordination | Treaties and networks address global collective action problems in AI safety |
| Transition Turbulence | Racing Intensity | Coordinated standards could reduce destructive race dynamics |
| Civilizational Competence | Institutional Quality | 11-country AI Safety Institute Network builds cross-border evaluation capacity |
Low-medium tractability due to US-China tensions, but very high impact potential if successful; information sharing is most feasible while capability restrictions face significant barriers.