Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Lawfare

Useful policy analysis for those exploring international AI governance frameworks, drawing direct parallels to nuclear and chemical weapons nonproliferation regimes and their real-world shortcomings.

Metadata

Importance: 62/100 · opinion piece · analysis

Summary

This Lawfare analysis by Akash Wasil examines whether the International Atomic Energy Agency (IAEA) model could serve as a template for international AI governance, using case studies from Iran, Syria, and Russia to identify both the strengths and significant limitations of such institutions. The piece argues that any 'IAEA for AI' proposal must seriously grapple with well-documented verification and enforcement challenges faced by the IAEA and OPCW.

Key Points

  • Nations at the November 2023 AI Safety Summit acknowledged the potential for 'serious, even catastrophic harm' from advanced AI, including deliberate misuse for bioweapons and loss of human control.
  • Scholars and figures like Sam Altman have proposed using the IAEA as a model for international AI governance institutions.
  • Case studies from Iran, Syria, and Russia reveal significant verification and enforcement limitations in existing arms control institutions like the IAEA and OPCW.
  • The White House's 2024 national security memorandum on AI directs the State Department to form an international AI governance strategy outlining multilateral engagement, signaling active interest in international AI governance frameworks.
  • Any viable international AI governance regime must account for the challenges of verifying compliance and enforcing agreements among competing nation-states.

Cited by 1 page

Page | Type | Quality
International Compute Regimes | Concept | 67.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 19 KB
Do We Want an “IAEA for AI”? | Lawfare
 Akash Wasil
In November 2023, nations at the first global AI Safety Summit recognized the possibility of “serious, even catastrophic harm” from advanced artificial intelligence (AI). Some of the risks identified stem from deliberate misuse. For example, a nation could decide to instruct an advanced AI system to develop novel biological weapons or cyberweapons; Anthropic CEO Dario Amodei testified in 2023 that AI systems would be able to greatly expand threats from “large-scale biological attacks” within two to three years. Other risks mentioned arise from unintentional factors—experts have warned, for instance, that AI systems could become powerful enough to subvert human control. A race toward superintelligent AI could lead to the creation of highly powerful and dangerous systems before scientists have developed the safeguards and technical understanding required to control them.

Many proposals to mitigate these risks have focused on the importance of international coordination. The recent White House national security memorandum on AI, for example, directs the Department of State to form an international AI governance strategy that outlines multilateral engagement with allies, partners, and competitors. As international AI governance discussions advance, nations may consider how certain kinds of dangerous AI development could be restricted and how such agreements could be verified.

Accordingly, some scholars—and public figures such as OpenAI CEO Sam Altman—have turned to the International Atomic Energy Agency (IAEA) as a potential model for international AI institutions. But is an “IAEA for AI” desirable?

International institutions like the IAEA serve an important function, but they also have limitations that should be considered when thinking about international AI governance. To demonstrate the strengths and weaknesses of this model, I examine several case studies of the IAEA and the similarly structured Organization for the Prohibition of Chemical Weapons, or OPCW (which is responsible for the verification and monitoring of chemical weapons). I focus on how these organizations responded to challenges in Iran, Syria, and Russia. These examples illustrate that “IAEA for AI” proposals must account for the well-documented challenges faced by the IAEA and OPCW.

The Importance of Verifiable International Agreements

To mitigate the “race to God-like AI,” Ian Hogarth—chair of the U.K. AI Safety Institute—proposed an “Island model,” in which a joint international lab performs research on superintelligence in a highly secure facility. Demis Hassabis, CEO of Google DeepMind, recently expressed support for a similar model, claiming that a

... (truncated, 19 KB total)
Resource ID: 6f171f833897de2c | Stable ID: sid_ZJehmUM3FE