Longterm Wiki

Author

Sumaya Nur Adan

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Institute for AI Policy and Strategy

Published by IAPS (Institute for AI Policy and Strategy), this analysis is relevant for understanding how international AI safety governance structures are being designed, particularly for those tracking the AISI network that emerged from the Bletchley AI Safety Summit process.

Metadata

Importance: 62/100 · organizational report · analysis

Summary

This IAPS analysis proposes a governance structure for the International Network of AI Safety Institutes, recommending a tiered membership model and collaborative mechanisms for standards development, information sharing, and safety evaluations. It aims to coordinate AI safety efforts across national AI Safety Institutes globally to improve collective oversight of frontier AI systems.

Key Points

  • Proposes a tiered membership structure for the International Network of AISIs to accommodate varying levels of institutional maturity and commitment.
  • Prioritizes three core functions: harmonizing safety standards, facilitating information sharing between institutes, and coordinating safety evaluations of frontier models.
  • Recommends collaborative mechanisms to enable joint evaluations and mutual recognition of safety assessments across jurisdictions.
  • Aims to strengthen global AI safety governance by creating interoperability between national AI safety bodies.
  • Addresses how smaller or newer AI safety institutes can meaningfully participate without full equivalence to leading institutes like UK AISI or US AISI.

Review

The document presents a comprehensive exploration of how an International Network of AI Safety Institutes could effectively collaborate to address emerging AI safety challenges. The proposed framework emphasizes a tiered membership structure with core, associate, and observer members, allowing for flexible yet structured international cooperation. The key strengths of the proposed approach include its adaptability, focus on technical collaboration, and mechanisms for including diverse stakeholders while maintaining core members' decision-making authority. The recommended working groups and potential inclusion of entities like Chinese research institutions and AI companies demonstrate a nuanced approach to international AI safety governance. The document carefully balances the need for inclusivity with maintaining technical rigor and preventing potential conflicts of interest.

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 44 KB
Key questions for the International Network of AI Safety Institutes — Institute for AI Policy and Strategy
Key questions for the International Network of AI Safety Institutes

Commentary

Nov 9

Written By Sumaya Nur Adan
Authors: Sumaya Nur Adan, Oliver Guest, Renan Araujo

 Executive Summary 

 In this commentary piece, we explore key questions for the International Network of AI Safety Institutes and suggest ways forward given the upcoming San Francisco convening on November 20-21, 2024.

 What kinds of work should the Network prioritize? 

 The Network should prioritize topics that are urgent and important for AI safety, align well with AISIs’ competencies, and benefit from collaboration, so that the Network achieves more than the sum of its individual members. Examples include:

 Standards: Members should work towards consensus on some safety-relevant practices. We particularly suggest safety frameworks; many AI companies committed at the Seoul AI Summit to publish such frameworks, but there is limited consensus so far on what constitutes one.

 Information sharing: Members should identify the kinds of information that should be shared between them and develop mechanisms for doing so.

 Evaluations: Members should continue working on safety evaluations, share best practices with each other, and collaborate to improve them.

 What should the structure of the Network be? 

 We suggest a tiered membership structure: 

 Core members: Countries with established AI Safety Institutes or equivalent national bodies. They would have full decision-making powers and contribute to the functioning of the Network.

 Associate members: Countries with a nascent focus on AI safety. They would have access to select shared resources and contribute via working groups.

 Observer members: Other countries, international organizations, academic institutions, and companies with relevant expertise. They would participate in project-specific working groups, providing expert input.

 
 A secretariat would provide central functions. It would comprise several types of positions:

 Permanent positions: A small permanent component comprising representatives from countries with the most institutionally developed AI Safety Institutes. (This would probably be the UK and US AISIs given the current landscape.)

 Rotating positions: Additional rotating positions filled by other core members on a temporary basis (for example, if the US and the UK formed the permanent component, they could be joined by 1-3 rotating members).

 Additional input from affiliate and observer members: Input mechanisms for the Network's various membership tiers and working groups to maintain the collaborative nature of the Network rat

... (truncated, 44 KB total)
Resource ID: 473d3df122573f58 | Stable ID: NmQ3NTVlZj