Longterm Wiki

International Governance of AI


Authors

Nicholas Emery-Xu, Richard Jordan, Robert Trager

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Springer

Academic analysis of international governance strategies for transformative AI, examining frameworks from subnational to global levels and addressing unique challenges posed by AI's rapid development and dual-use potential.

Paper Details

Citations
0
Year
2025

Metadata

journal article · analysis

Summary

The article explores various governance strategies for transformative AI, analyzing potential approaches from subnational norms to international regimes. It highlights the unique challenges of governing AI due to its rapid development, dual-use potential, and complex technological landscape.

Key Points

  • AI requires innovative governance due to its unique dual-use and rapidly evolving nature
  • Subnational governance alone is insufficient to manage transformative AI risks
  • Multiple governance approaches may be necessary, including national standards and international regimes
  • Controlling key infrastructure chokepoints could be crucial for effective AI governance

Review

This comprehensive analysis provides a nuanced examination of AI governance challenges, emphasizing the need for multi-layered, adaptive governance strategies. The authors argue that traditional governance models are insufficient for managing transformative AI, given its unprecedented combination of dual-use properties, ease of proliferation, and potential destructive capabilities.

The research systematically evaluates governance options across different stages (development, proliferation, deployment) and actor levels (subnational, national, international). Key insights include identifying potential 'chokepoints' in AI infrastructure, recognizing the limitations of current subnational governance approaches, and proposing potential international governance frameworks such as non-proliferation regimes or an International Monopoly.

The analysis is particularly valuable for its sophisticated understanding of technological governance dynamics, emphasizing the complex interplay between technological innovation, economic incentives, and geopolitical strategic considerations.

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 2 KB
# International governance of advancing artificial intelligence
Authors: Nicholas Emery-Xu, Richard Jordan, Robert Trager
Journal: AI & SOCIETY
Published: 2025-04
DOI: 10.1007/s00146-024-02050-7
## Abstract

New technologies with military applications may demand new modes of governance. In this article, we develop a taxonomy of technology governance forms, outline their strengths, and red-team their weaknesses. In particular, we consider the challenges and opportunities posed by advancing artificial intelligence, which is likely to have substantial dual-use properties. We conclude that subnational governance, though prevalent and mitigating some risks, is insufficient when the individual rewards from societally harmful actions outweigh normative sanctions, as is likely to be the case with AI. Nationally enforced standards are promising ways to govern AI deployment, but they are less viable in the “race-to-the-bottom” environments that are becoming common. When it comes to powerful technologies with military implications, there is only one multilateral option with a strong historical precedent: a non-proliferation plus norms-of-use regime, which we call NPT+. We believe that a non-proliferation regime may, therefore, be the necessary foundation for AI governance. However, AI may exhibit characteristics that would make a non-proliferation regime less effective than it has proven for nuclear weapons. As an alternative, verification-backed restrictions on AI development and use would address more risks, but they face challenges in the case of advanced AI, and we show how these challenges may not have technical solutions. Perhaps more importantly, we show that there is no clear example of major powers restricting the development of a powerful military technology when that technology lacks a ready substitute. We, therefore, turn to a final alternative, International Monopoly, which was the preferred solution of many scholars and policymakers in the early nuclear era. It should be considered again for governing AI: a monopoly would require less-invasive monitoring, though at the possible cost of eroding national sovereignty.
Ultimately, we conclude that it is too soon to tell whether a non-proliferation regime, a verification-based regime, or an International Monopoly is most feasible for governing AI. Nonetheless, a variety of policies would yield a high return across all three scenarios, and we conclude by identifying some of these steps that could be taken today.
Resource ID: e2d123a136a4c4d4 | Stable ID: sid_y4stnoIgNX