Longterm Wiki

International AI Safety Report (October 2025)


This is an official interim update to the landmark International AI Safety Report, a major intergovernmental reference document on AI safety risks. It is highly relevant for those tracking global AI governance and the evolving capabilities landscape as of late 2025.

Metadata

Importance: 78/100 · organizational report · primary source

Summary

A focused interim update to the International AI Safety Report, chaired by Yoshua Bengio, covering significant developments in AI capabilities and their risk implications between full annual editions. The report is produced by an international panel of experts from over 30 countries and aims to keep policymakers and researchers current on fast-moving AI developments. It serves as an authoritative, consensus-oriented reference for AI safety governance.

Key Points

  • Introduces 'Key Updates' as shorter, focused reports to bridge annual editions of the International AI Safety Report due to the rapid pace of AI development.
  • Chaired by Yoshua Bengio with senior advisers including Hinton, Russell, Acemoglu, Narayanan, and Schölkopf, representing broad international expertise.
  • Expert Advisory Panel spans representatives from 30+ countries and organizations including the UN, EU, and OECD, providing global governance relevance.
  • Focuses specifically on AI capabilities advances and their risk implications as of late 2025, targeting policymakers and researchers.
  • Deliberately avoids endorsing specific policy or regulatory approaches, aiming for technical objectivity and broad international applicability.

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 84 KB
First Key Update: Capabilities and Risk Implications | International AI Safety Report 

 Contributors 

 Chair

 Prof. Yoshua Bengio, Université de Montréal / LawZero / Mila - Quebec AI Institute

 Expert Advisory Panel

 The Expert Advisory Panel is an international advisory body that advises the Chair on the content of the Report. The Expert Advisory Panel provided technical feedback only. The Report – and its Expert Advisory Panel – does not endorse any particular policy or regulatory approach.

 The Panel comprises representatives from over 30 countries and international organisations, including the United Nations (UN), European Union (EU), and the Organisation for Economic Co-operation and Development (OECD). The membership of the Expert Advisory Panel for the 2026 International AI Safety Report is listed at internationalaisafetyreport.org/expert-advisory-panel.

 Lead Writers

 Stephen Clare 

 Carina Prunkl 

 Writing Group

 Maksym Andriushchenko, ELLIS Institute Tübingen

 Ben Bucknall, University of Oxford

 Philip Fox, KIRA Center

 Tiancheng Hu, University of Cambridge

 Cameron Jones, Stony Brook University

 Sam Manning, Centre for the Governance of AI

 Nestor Maslej, Stanford University

 Vasilios Mavroudis, The Alan Turing Institute

 Conor McGlynn, Harvard University

 Malcolm Murray, SaferAI

 Shalaleh Rismani, Mila - Quebec AI Institute

 Charlotte Stix, Apollo Research

 Lucia Velasco, Maastricht University

 Nicole Wheeler, Advanced Research and Invention Agency (ARIA)

 Daniel Privitera (Interim Lead), KIRA Center

 Sören Mindermann (Interim Lead), independent

 Senior Advisers

 Daron Acemoglu, Massachusetts Institute of Technology

 Thomas G. Dietterich, Oregon State University

 Fredrik Heintz, Linköping University

 Geoffrey Hinton, University of Toronto

 Nick Jennings, Loughborough University

 Susan Leavy, University College Dublin

 Teresa Ludermir, Federal University of Pernambuco

 Vidushi Marda, AI Collaborative

 Helen Margetts, University of Oxford

 John McDermid, University of York

 Jane Munga, Carnegie Endowment for International Peace

 Arvind Narayanan, Princeton University

 Alondra Nelson, Institute for Advanced Study

 Clara Neppel, IEEE

 Sarvapali D. (Gopal) Ramchurn, Responsible AI UK

 Stuart Russell, University of California, Berkeley

 Marietje Schaake, Stanford University

 Bernhard Schölkopf, ELLIS Institute Tübingen

 Alvaro Soto, Pontificia Universidad Católica de Chile

 Lee Tiedrich, University of Maryland/Duke

 Gaël Varoquaux, Inria

 Andrew Yao, Tsinghua University

 Ya-Qin Zhang, Tsinghua University

 Secretariat

 UK AI Security Institute: Lambrini Das, Claire Dennis, Arianna Dini, Freya Hempleman, Samuel Kenny, Patrick King

... (truncated, 84 KB total)