Longterm Wiki

Managing AI Risks in an Era of Rapid Progress (Yoshua Bengio and others)

paper

Authors

Yoshua Bengio·Geoffrey Hinton·Andrew Yao·Dawn Song·Pieter Abbeel·Trevor Darrell·Yuval Noah Harari·Ya-Qin Zhang·Lan Xue·Shai Shalev-Shwartz·Gillian Hadfield·Jeff Clune·Tegan Maharaj·Frank Hutter·Atılım Güneş Baydin·Sheila McIlraith·Qiqi Gao·Ashwin Acharya·David Krueger·Anca Dragan·Philip Torr·Stuart Russell·Daniel Kahneman·Jan Brauner·Sören Mindermann

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

A high-profile consensus paper co-authored by Turing Award winner Yoshua Bengio, synthesizing expert views on extreme AI risks and calling for urgent combined technical and governance responses; widely cited in AI safety and policy discussions.

Paper Details

Citations: 81 (1 influential)
Year: 2023

Metadata

Importance: 82/100 · arXiv preprint · primary source

Abstract

Artificial Intelligence (AI) is progressing rapidly, and companies are shifting their focus to developing generalist AI systems that can autonomously act and pursue goals. Increases in capabilities and autonomy may soon massively amplify AI's impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems. Although researchers have warned of extreme risks from AI, there is a lack of consensus about how exactly such risks arise, and how to manage them. Society's response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts. AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems. In this short consensus paper, we describe extreme risks from upcoming, advanced AI systems. Drawing on lessons learned from other safety-critical technologies, we then outline a comprehensive plan combining technical research and development with proactive, adaptive governance mechanisms for a more commensurate preparation.

Summary

This consensus paper by Yoshua Bengio and colleagues argues that advancing AI systems pose extreme risks—including large-scale social harms, malicious misuse, and irreversible loss of human control—that current safety research and governance mechanisms are inadequate to address. The authors propose a comprehensive response combining technical AI safety research with proactive, adaptive governance frameworks, drawing on lessons from other safety-critical technologies.

Key Points

  • Rapid AI progress toward generalist, autonomous systems may soon massively amplify both benefits and risks, including irreversible loss of human control.
  • Current AI safety research is lagging behind capabilities development, and governance initiatives lack mechanisms to prevent misuse or address autonomous systems.
  • Society's response is incommensurate with the scale and speed of anticipated AI progress, despite warnings from many experts, including this group of researchers.
  • The paper draws on safety-critical technology precedents (e.g., nuclear, aviation) to inform a more robust AI risk management framework.
  • A combined approach of technical safety R&D and proactive, adaptive governance institutions is proposed as a minimum adequate response.

Cited by 2 pages

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 34 KB
[2310.17688] Managing AI Risks in an Era of Rapid Progress

 Managing AI Risks in an Era of Rapid Progress

 
 
 
 Yoshua Bengio
 Mila - Quebec AI Institute, Université de Montréal, Canada CIFAR AI Chair
 
 Geoffrey Hinton
 University of Toronto, Vector Institute
 
 Andrew Yao
 Tsinghua University
 
 Dawn Song
 UC Berkeley
 
 Pieter Abbeel
 UC Berkeley
 
 Yuval Noah Harari
 The Hebrew University of Jerusalem, Department of History
 
 Ya-Qin Zhang
 Tsinghua University
 
 Lan Xue
 Tsinghua University, Institute for AI International Governance
 
 Shai Shalev-Shwartz
 The Hebrew University of Jerusalem
 
 Gillian Hadfield
 University of Toronto, SR Institute for Technology and Society, Vector Institute
 
 Jeff Clune
 University of British Columbia, Canada CIFAR AI Chair, Vector Institute
 
 Tegan Maharaj
 University of Toronto, Vector Institute
 
 Frank Hutter
 University of Freiburg
 
 Atılım Güneş Baydin
 University of Oxford
 
 Sheila McIlraith
 University of Toronto, Vector Institute
 
 Qiqi Gao
 East China University of Political Science and Law
 
 Ashwin Acharya
 Institute for AI Policy and Strategy
 
 David Krueger
 University of Cambridge
 
 Anca Dragan
 UC Berkeley
 
 Philip Torr
 University of Oxford
 
 Stuart Russell
 UC Berkeley
 
 Daniel Kahneman
 Princeton University, School of Public and International Affairs
 
 Jan Brauner*
 University of Oxford
 
 Sören Mindermann*
 University of Oxford, Mila - Quebec AI Institute, Université de Montréal
 
 
 
 

 
 Abstract
 
 In this short consensus paper, we outline risks from upcoming, advanced AI systems. We examine large-scale social harms and malicious uses, as well as an irreversible loss of human control over autonomous AI systems. In light of rapid and continuing AI progress, we propose urgent priorities for AI R&D and governance.

 
 
 
 
 Rapid AI progress

 
 In 2019, GPT-2 could not reliably count to ten. Only four years later, deep learning systems can write software, generate photorealistic scenes on demand, advise on intellectual topics, and combine language and image processing to steer robots. As AI developers scale these systems, unforeseen abilities and behaviors emerge spontaneously, without explicit programming [1]. Progress in AI has been swift and, to many, surprising.

 
 
 The pace of progress may surprise us again. Current deep learning systems still lack important capabilities, and we do not know how long it will take to develop them. However, companies are engaged in a race to create generalist AI systems that match or exceed human abilities in most cognitive work [2, 3].

 
 
 They are rapidly deploying more resources and developing new techniques to increase AI capabilities. Progress in AI also enables faster progress: AI assistants are increasingly used to automate programming [4], data collection [5, 6], and chip design [7] to impro

... (truncated, 34 KB total)
Resource ID: abf8888683dbf163 | Stable ID: ZmJiNGU3ZG