
The International Scientific Report on the Safety of Advanced AI - Yoshua Bengio

 
In November 2023, I was given the responsibility of chairing the International Scientific Report on the Safety of Advanced AI, under an international mandate following a resolution adopted by 30 countries, as well as representatives from the EU and the UN, at the UK AI Safety Summit at Bletchley Park. The report synthesizes scientific evidence and arguments regarding the safety of current and anticipated general-purpose AI systems. Inspired by the Intergovernmental Panel on Climate Change (IPCC), it brings together a panel of experts nominated by 30 nations, as well as representatives from the EU and the UN, and it studies the impact AI could have if governments and wider society fail to deepen their collaboration on AI safety.

What we have completed now is an interim report, delivered at the Seoul AI Summit on May 22, 2024. The final version will be presented at the AI Action Summit in France next February. The interim report was written thanks to contributions from over 70 experts, including the 32 experts nominated by the participating countries and 26 senior advisors with a wide diversity of views and backgrounds, as well as an amazing team of dedicated collaborators and an incredibly well-organized, agile, and thoughtful secretariat.

The document focuses on general-purpose AI, which has a wide variety of applications, and takes a broad view of AI safety, covering issues ranging from bias and toxic output to threats to democratic processes (disinformation), national security, economic destabilization, environmental harms, and loss-of-control concerns. It synthesizes the scientific literature (with 40 pages of citations) but, as requested, does not make policy recommendations. It clarifies where uncertainty and disagreement exist within the scientific community, which should motivate more research to clear the fog around potentially significant future impacts.

Some of the main takeaways include the following:

General-purpose AI can be applied for great good if properly

... (truncated, 5 KB total)