
Authors

elifland, Misha_Yagudin

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: EA Forum

Published on the EA Forum, this post represents one of the more rigorous quantitative efforts to estimate AI catastrophic risk probabilities using structured forecasting methods, and is frequently cited in AI safety timeline and risk discussions.

Metadata

Importance: 68/100 · analysis

Summary

Samotsvety, a prominent forecasting group, presents its probabilistic estimates of AI-related catastrophic and existential risks across various time horizons. The forecasts cover transformative AI timelines, the probability of AI-caused extinction or civilizational collapse, and intermediate risk milestones. Grounded in superforecasting methodology, the post serves as a quantitative benchmark for AI safety discussions.

Key Points

  • Samotsvety is a highly regarded forecasting team known for strong calibration on geopolitical and scientific questions, lending credibility to their AI risk estimates.
  • The forecasts provide explicit numerical probabilities for AI-caused catastrophic outcomes, offering a rare quantitative reference point in AI safety discourse.
  • Estimates span multiple time horizons (e.g., the next 20 years, by 2070, by 2100), helping distinguish near-term from long-term risk concerns.
  • The group aggregates and reconciles views from multiple forecasters, reducing individual bias and providing a consensus-style probability distribution.
  • Results can be compared against other prominent forecasters (e.g., Metaculus community, individual researchers) to understand the range of expert opinion on AI risk.

Cached Content Preview

HTTP 200 · Fetched Apr 10, 2026 · 13 KB
# Samotsvety's AI risk forecasts
By elifland, Misha_Yagudin
Published: 2022-09-09
*Crossposted to* [*LessWrong*](https://www.lesswrong.com/posts/YMsD7GA7eTg2BafQd/samotsvety-s-ai-risk-forecasts) *and* [*Foxy Scout*](https://www.foxy-scout.com/samotsvetys-ai-risk-forecasts/)

Introduction
============

In [my review of What We Owe The Future](https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future#Other_inputs) (WWOTF), I wrote:

> Finally, I’ve updated some based on my experience with [Samotsvety forecasters](https://samotsvety.org/) when discussing AI risk… When we discussed the report on power-seeking AI, I expected tons of skepticism but in fact almost all forecasters seemed to give >=5% to disempowerment by power-seeking AI by 2070, with many giving >=10%.

In the comments, [Peter Wildeford asked](https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future?commentId=cB2FnhFRJujCpF6Dn#comments):

> It looks like Samotsvety also forecasted AI timelines and AI takeover risk - are you willing and able to provide those numbers as well?

We separately received a request from the [FTX Foundation](https://ftxfoundation.org/) to forecast on 3 questions about AGI timelines and risk.

I sent out surveys to get Samotsvety’s up-to-date views on all 5 of these questions, and thought it would be valuable to share the forecasts publicly.

A few of the headline aggregate forecasts are:

1.  25% chance of misaligned AI takeover by 2100, barring pre-[APS-AI](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#heading=h.14onymzb0y9) catastrophe
2.  81% chance of [Transformative AI](https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/) (TAI) by 2100, barring pre-TAI catastrophe
3.  32% chance of AGI being developed in the next 20 years

Forecasts
=========

In each case I aggregated forecasts by removing the single most extreme forecast on each end, then taking the [geometric mean of odds](https://forum.effectivealtruism.org/posts/sMjcjnnpoAQCcedL2/when-pooling-forecasts-use-the-geometric-mean-of-odds).
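To make that procedure concrete, here is a minimal Python sketch (not from the original post; the function name and sample probabilities are hypothetical). It trims the single most extreme forecast on each end, converts each remaining probability p to odds p/(1-p), takes the geometric mean of those odds, and maps the result back to a probability:

```python
import math

def aggregate_forecasts(probs: list[float]) -> float:
    """Trim the single most extreme forecast on each end, then pool the
    rest via the geometric mean of odds and return a probability."""
    if len(probs) < 3:
        raise ValueError("need at least 3 forecasts to trim both ends")
    trimmed = sorted(probs)[1:-1]           # drop the lowest and the highest
    odds = [p / (1 - p) for p in trimmed]   # convert probabilities to odds
    # geometric mean of odds = exp(mean of the log-odds)
    pooled_odds = math.exp(sum(math.log(o) for o in odds) / len(odds))
    return pooled_odds / (1 + pooled_odds)  # convert odds back to a probability

# Hypothetical inputs: five forecasters' probabilities for one question.
print(round(aggregate_forecasts([0.05, 0.18, 0.25, 0.30, 0.60]), 2))  # 0.24
```

Trimming one forecast from each end guards against outliers, which matters here because log-odds diverge as probabilities approach 0 or 1, so a single extreme forecaster could otherwise pull the pooled estimate a long way.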

To reduce concerns of in-group bias to some extent, I calculated a separate aggregate for those who weren’t highly-engaged EAs (HEAs) before joining Samotsvety. In mo

... (truncated, 13 KB total)
Resource ID: d6d72c7d3fed4844 | Stable ID: sid_hGP0W9HEgA