Longterm Wiki

Roman Yampolskiy

paper

Author

Severin Field

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Data Status

Not fetched

Abstract

The development of artificial general intelligence (AGI) is likely to be one of humanity's most consequential technological advancements. Leading AI labs and scientists have called for the global prioritization of AI safety [1], citing existential risks comparable to nuclear war. However, research on catastrophic risks and AI alignment is often met with skepticism, even by experts. Furthermore, online debate over the existential risk of AI has begun to turn tribal (e.g., name-calling such as "doomer" or "accelerationist"). Until now, no systematic study has explored the patterns of belief and the levels of familiarity with AI safety concepts among experts. I surveyed 111 AI experts on their familiarity with AI safety concepts, key objections to AI safety, and reactions to safety arguments. My findings reveal that AI experts cluster into two viewpoints -- an "AI as controllable tool" and an "AI as uncontrollable agent" perspective -- diverging in beliefs toward the importance of AI safety. While most experts (78%) agreed or strongly agreed that "technical AI researchers should be concerned about catastrophic risks", many were unfamiliar with specific AI safety concepts. For example, only 21% of surveyed experts had heard of "instrumental convergence," a fundamental concept in AI safety predicting that advanced AI systems will tend to pursue common sub-goals (such as self-preservation). The least concerned participants were the least familiar with concepts like this, suggesting that effective communication of AI safety should begin with establishing clear conceptual foundations in the field.
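The abstract's central empirical claim, that expert responses separate into two viewpoints, is a standard clustering result. The preview does not specify the paper's analysis pipeline, so the sketch below is a hypothetical illustration only: the simulated Likert data, the item count, and the choice of k-means with k=2 are all assumptions, not the author's method.

```python
# Hypothetical sketch: partitioning Likert-scale survey responses into
# two viewpoint clusters. Simulated data; NOT the paper's actual pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# 111 simulated respondents x 10 Likert items (1-5), drawn from two
# synthetic response profiles purely for illustration.
tool_view = rng.integers(1, 4, size=(60, 10))   # lower-concern profile
agent_view = rng.integers(3, 6, size=(51, 10))  # higher-concern profile
responses = np.vstack([tool_view, agent_view])

# Standardize items so no single question dominates the distance metric,
# then partition respondents into k=2 clusters.
scaled = StandardScaler().fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

print(np.bincount(labels))  # sizes of the two recovered viewpoint clusters
```

With well-separated response profiles like these, k-means recovers groupings close to the simulated 60/51 split; on real survey data one would validate the number of clusters (e.g., with silhouette scores) rather than assume two.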

Cited by 4 pages

Cached Content Preview

HTTP 200 · Fetched Feb 23, 2026 · 48 KB
Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts

 
 
Severin Field
severin.field@louisville.edu
Cambridge ERA:AI Fellowship, Cambridge, UK
 
 
 
Abstract

The development of artificial general intelligence (AGI) is likely to be one of humanity’s most consequential technological advancements. Leading AI labs and scientists have called for the global prioritization of AI safety [1], citing existential risks comparable to nuclear war. However, research on catastrophic risks and AI alignment is often met with skepticism, even by experts. Furthermore, online debate over the existential risk of AI has begun to turn tribal (e.g., name-calling such as “doomer” or “accelerationist”). Until now, no systematic study has explored the patterns of belief and the levels of familiarity with AI safety concepts among experts. I surveyed 111 AI experts on their familiarity with AI safety concepts, key objections to AI safety, and reactions to safety arguments. My findings reveal that AI experts cluster into two viewpoints – an “AI as controllable tool” and an “AI as uncontrollable agent” perspective – diverging in beliefs toward the importance of AI safety. While most experts (78%) agreed or strongly agreed that “technical AI researchers should be concerned about catastrophic risks”, many were unfamiliar with specific AI safety concepts. For example, only 21% of surveyed experts had heard of “instrumental convergence,” a fundamental concept in AI safety predicting that advanced AI systems will tend to pursue common sub-goals (such as self-preservation). The least concerned participants were the least familiar with concepts like this, suggesting that effective communication of AI safety should begin with establishing clear conceptual foundations in the field.

Footnote 1: The survey defines AGI as “AI systems that are better at STEM research than the best human scientists, in addition to potentially having other advanced capabilities.”

 
 
Keywords: AI Safety, Surveying Experts, p(doom), Existential Risk
 
 
 
 1 Introduction

 
Since the founding of modern computer science, scientists such as Turing [2] have explored the possibility of achieving human-like intelligence. Over the past few decades, researchers have built a substantial body of work examining the risks posed by AI systems, an area of study termed “AI safety.”

 
 
Today, many prominent AI researchers, including Nobel Laureate Geoffrey Hinton and Turing Laureate Yoshua Bengio, argue that intelligent machines could endanger human civilization [3]. In May 2023, many of the most notable scientists and figures in AI signed a statement declaring, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” [1].

 
 
 Prominent AI researchers hold dramatically dif

... (truncated, 48 KB total)
Resource ID: 4e7f0e37bace9678 | Stable ID: MWU4NGI2ND