Longterm Wiki

Credibility Rating

4/5
High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: OECD

Relevant for understanding institutional efforts to coordinate AI safety governance internationally; the AISI Network is a direct outgrowth of the 2023 Bletchley Declaration and connects bodies like the UK AISI, US AISI (AISI at NIST), and counterparts in other nations.

Metadata

Importance: 58/100 · organizational report · analysis

Summary

The AISI International Network, launched in May 2024, is a multilateral initiative connecting national AI Safety Institutes to coordinate on safe and trustworthy AI development. It facilitates knowledge sharing, joint evaluations, and harmonized governance approaches across member countries. The network represents a key institutional mechanism for translating AI safety research into coordinated international policy.

Key Points

  • Launched May 2024 to connect national AI Safety Institutes (AISIs) across multiple countries for coordinated AI safety efforts.
  • Focuses on knowledge sharing, joint research, and aligning governance frameworks to prevent fragmentation in global AI oversight.
  • Aims to develop shared evaluation methodologies and standards for assessing frontier AI model risks.
  • Represents a formal multilateral structure for operationalizing international AI safety commitments made at summits like Bletchley Park.
  • Hosted on OECD.AI, signaling alignment with broader intergovernmental AI governance infrastructure.

Review

The source document provides a comprehensive analysis of the newly formed AI Safety Institute (AISI) International Network, which represents a critical multilateral effort to address the global challenges of AI safety. The network's primary goal is to create a collaborative platform where national AI safety institutes can share knowledge, develop consistent standards, and collectively mitigate potential AI risks that transcend national boundaries. The document explores three potential organizational models for the network: a rotating secretariat, a static secretariat in a designated country, and a static secretariat hosted by an intergovernmental organization. Each model presents unique benefits and challenges, highlighting the complexity of establishing an effective international AI governance mechanism. The authors emphasize the importance of maintaining flexibility, inclusivity, and adaptability, while also recommending strategic partnerships with organizations like the UN and OECD to enhance the network's global reach and technical expertise.

Cited by 1 page

Page                           Type     Quality
AI Safety Institutes (AISIs)   Policy   69.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 19 KB
Safer together: How governments can enhance the AI Safety Institute Network’s role in global AI governance - OECD.AI 


 Frank Ryan , George Gor , Niki Iliadis 

 November 18, 2024 — 8 min read 

 As we integrate AI into every facet of society—from healthcare to national security—it is more than a technical challenge to ensure that the technology

... (truncated, 19 KB total)
Resource ID: 94926d25ba8555ea | Stable ID: sid_9bBmDgif34