Back
AI Impacts: Surveys of AI Risk Experts
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: AI Impacts
A useful reference page for empirical data on what AI experts believe about risk and timelines; helpful for grounding discussions in surveyed expert opinion rather than anecdote.
Metadata
Importance: 62/100 · wiki page · reference
Summary
This AI Impacts wiki page compiles and summarizes surveys conducted among AI researchers and experts regarding their views on AI risk, timelines to transformative AI, and safety concerns. It serves as a reference for empirical data on expert opinion within the AI safety community, tracking how professional assessments of AI risk have evolved over time.
Key Points
- Aggregates multiple surveys of AI researchers on topics including timelines to AGI, probability of catastrophic outcomes, and prioritization of safety research.
- Provides a resource for tracking shifts in expert consensus on AI risk over time, useful for calibrating community-wide beliefs.
- Surveys include responses from machine learning researchers and AI safety specialists, offering both mainstream and safety-focused perspectives.
- Data can inform policy discussions and resource allocation decisions by grounding them in empirical expert opinion rather than speculation.
- Part of AI Impacts' broader mission to provide rigorous, evidence-based information relevant to understanding AI's long-term trajectory.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Accident Risk Cruxes | Crux | 67.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 11 KB
Surveys of experts on levels of AI Risk [AI Impacts Wiki]
Table of Contents
Surveys of experts on levels of AI Risk
Surveys of AI experts
2016 Expert Survey on Progress in AI
Zhang et al 2019
2022 Expert Survey on Progress in AI
Michael et al 2022
Generation Lab 2023
Expert Survey on Progress in AI 2023
Not currently included on this list
Surveys of AI safety/governance experts
Not currently included on this list
Other
Surveys of experts on levels of AI Risk
Published 9 May 2023; last updated 23 May 2023
This page is being updated, and may be low quality.
We know of six surveys of AI experts and two surveys of AI safety/governance experts on risks from advanced AI.
Surveys of AI experts
2016 Expert Survey on Progress in AI
(Main article: 2016 Expert Survey on Progress in AI )
Paper: When Will AI Exceed Human Performance? Evidence from AI Experts (Grace et al. 2016, published 2018)
“Say we have 'high-level machine intelligence' when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. Assume for the purpose of this question that HLMI will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run? Please answer by saying how probable you find the following kinds of impact, with probabilities adding to 100%:”
“Extremely good (e.g. rapid growth in human flourishing)”: median 20%
“On balance good”: median 25%
“More or less neutral”: median 20%
“On balance bad”: median 10%
“Extremely bad (e.g. human extinction)”: median 5%
40% of respondents put at least 10% probability on “extremely bad”
“Stuart Russell's argument”
Respondents were presented with an excerpt from a piece by Stuart Russell, then asked “Do you think this argument points at an important problem?”
11%: “No, not a real problem”
19%: “No, not an important problem”
31%: “Yes, a moderately important problem”
34%: “Yes, an important problem”
5%: “Yes, among the most important problems in the field”
AI safety research
Respondents were presented with a definition of “AI safety research,” then asked “How much should society prioritize AI safety research, relative to how much it is currently prioritized?”
... (truncated, 11 KB total)
Resource ID:
e4357694019bb5f5 | Stable ID: OWMxZGI3Zm