Longterm Wiki

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: European Union

This European Commission news item documents an early milestone in international AI safety governance infrastructure, relevant to those tracking how governments are coordinating on frontier AI risk evaluation and oversight.

Metadata

Importance: 55/100 · News article

Summary

This page covers the inaugural meeting of the International Network of AI Safety Institutes, a multilateral initiative bringing together national AI safety bodies to coordinate on evaluation methodologies, information sharing, and global AI safety governance. The network represents a significant step toward international coordination on frontier AI risk assessment.

Key Points

  • Marks the first formal gathering of national AI Safety Institutes under an international coordination framework
  • Aims to align evaluation methodologies and share findings across jurisdictions to avoid duplication of effort
  • Reflects growing governmental consensus that AI safety oversight requires cross-border collaboration
  • European Commission's digital strategy office is involved, signaling EU engagement with global AI safety governance
  • Builds on momentum from AI Safety Summits (Bletchley, Seoul) to institutionalize international AI safety cooperation

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 4 KB
First meeting of the International Network of AI Safety Institutes | Shaping Europe’s digital future

NEWS ARTICLE
Publication: 20 November 2024
AI safety institutes launched the International Network of AI Safety Institutes in San Francisco. The mission statement reflects their goals to advance AI safety, research, testing, and guidance.

Image credit: AdobeStock © Supatman
 On 20 and 21 November 2024, AI safety institutes and government-mandated offices from Australia, Canada, the European Commission, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States are convening in San Francisco for the first meeting of the International Network of AI Safety Institutes.

 Building on the Seoul Statement of Intent toward International Cooperation on AI Safety Science, released during the AI Seoul Summit on 21 May 2024, this initiative marks the beginning of a new phase of international collaboration on AI safety.

 The Network brings together technical organisations dedicated to advancing AI safety, helping governments and societies better understand the risks posed by advanced AI systems, and proposing solutions to mitigate these risks. The Network members also stress in their Mission Statement that “international cooperation to promote AI safety, security, inclusivity, and trust is vital to addressing these risks, driving responsible innovation, and expanding access to the benefits of AI worldwide.”

Beyond addressing potential harms, the institutes and offices involved will guide the responsible development and deployment of AI systems.

Goals and priorities of the network
The International Network of AI Safety Institutes will serve as a forum for collaboration, bringing together technical expertise to address AI safety risks and share best practices. Recognising the importance of cultural and linguistic diversity, the Network will work towards a unified understanding of AI safety risks and mitigation strategies.

It will focus on four priority areas:
  • Research: Collaborating with the scientific community to advance research on the risks and capabilities of advanced AI systems, while sharing key findings to strengthen the science of AI safety.
  • Testing: Developing and sharing best practices for testing advanced AI systems, including conducting joint testing exercises and exchanging insights from domestic evaluations, as appropriate.
  • Guidance: Facilitating shared approaches to interpreting test results for advanced AI systems to ensure consistent and effective responses.
  • Inclusion: Engaging partners and stakeholders in regions at all stages of development, by sharing information and technical tools in accessible ways to broaden participation in AI safety science.
... (truncated, 4 KB total)
Resource ID: d73b249449782a66 | Stable ID: sid_YBqbhS9T1r