Longterm Wiki

AI Impacts 2023 survey

paper

Authors

Katja Grace·Harlan Stewart·Julia Fabienne Sandkühler·Stephen Thomas·Ben Weinstein-Raun·Jan Brauner·Richard C. Korzekwa

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Data Status

Not fetched

Abstract

In the largest survey of its kind, 2,778 researchers who had published in top-tier artificial intelligence (AI) venues gave predictions on the pace of AI progress and the nature and impacts of advanced AI systems. The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey). Most respondents expressed substantial uncertainty about the long-term value of AI progress: while 68.3% thought good outcomes from superhuman AI are more likely than bad, 48% of these net optimists gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 38% and 51% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including misinformation, authoritarian control, and inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 11, 2026 · 98 KB
[2401.02843] Thousands of AI Authors on the Future of AI 
 Thousands of AI Authors on the Future of AI

 
 
 
Katja Grace
AI Impacts, Berkeley, California, United States
katja@aiimpacts.org

Harlan Stewart †
AI Impacts, Berkeley, California, United States

Julia Fabienne Sandkühler †
Department of Psychology, University of Bonn, Germany

Stephen Thomas †
AI Impacts, Berkeley, California, United States

Ben Weinstein-Raun
Independent, Berkeley, California, United States

Jan Brauner
Department of Computer Science, University of Oxford, United Kingdom

Corresponding author. † Equal contribution.

(January 2024)

 
 Abstract

In the largest survey of its kind, we surveyed 2,778 researchers who had published in top-tier artificial intelligence (AI) venues, asking for their predictions on the pace of AI progress and the nature and impacts of advanced AI systems. The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey).

Most respondents expressed substantial uncertainty about the long-term value of AI progress: While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including spread of false information, authoritarian population control, and worsened inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.

 
 
 
 
 1 Introduction

 
Artificial intelligence appears poised to reshape society. Decision-makers are working to address opportunities and threats due to AI in the private sector [OpenAI, 2023], academia [Center for Human-compatible Artificial Intelligence, 202

... (truncated, 98 KB total)
Resource ID: 3f9927ec7945e4f2 | Stable ID: MWQ4NDM1Yz