Longterm Wiki

Credibility Rating

2/5
Mixed (2)

Mixed quality. Some useful content but inconsistent editorial standards. Claims should be verified.

Rating inherited from publication venue: Substack

Written by Benjamin Todd (80,000 Hours founder), this piece engages with one of the most prominent 2024 documents arguing for near-term AGI, making it useful context for understanding current debates about timelines and their strategic implications.

Metadata

Importance: 55/100 · blog post · commentary

Summary

Benjamin Todd reviews Leopold Aschenbrenner's 'Situational Awareness' essay series, analyzing its claims about accelerating AGI timelines, the plausibility of rapid capability gains, and implications for AI safety and strategy. The review assesses the evidence and reasoning behind Aschenbrenner's bullish timeline predictions and their significance for the AI safety community.

Key Points

  • Engages critically with Aschenbrenner's 'Situational Awareness' essays which predict rapid progression to AGI and superintelligence within this decade.
  • Evaluates the key arguments for compressed timelines, including scaling laws, algorithmic progress, and anticipated compute growth.
  • Considers strategic and policy implications if Aschenbrenner's timeline predictions are correct or approximately correct.
  • Discusses how shorter timelines affect prioritization decisions for people working on AI safety and governance.
  • Provides an 80,000 Hours perspective on how to respond to high-uncertainty but potentially high-stakes timeline forecasts.

Cited by 1 page

Page: The Case For AI Existential Risk · Type: Argument · Quality: 66.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 13 KB
Shortening AGI timelines: a review of expert forecasts

Benjamin Todd · Apr 09, 2025

As a non-expert, it would be great if there were experts who could tell us when we should expect artificial general intelligence (AGI) to arrive.

 Unfortunately, there aren’t.

 There are only different groups of experts with different weaknesses.

This article is an overview of what five different types of experts say about when we'll reach AGI, and what we can learn from them (it feeds into my full article on forecasting AI).

In short:

  • Every group shortened their estimates in recent years.
  • AGI before 2030 seems within the range of expert opinion, even if many disagree.
  • None of the forecasts seem especially reliable, so they neither rule in nor rule out AGI arriving soon.

In four years, the mean estimate on Metaculus for when AGI will be developed has plummeted from 50 years to 5. There are problems with the definition used, but the graph reflects a broader pattern of declining estimates.

Here's an overview of the five groups:

 AI experts 

 1. Leaders of AI companies 

The leaders of AI companies are saying that AGI will arrive in 2–5 years, and they appear to have recently shortened their estimates.

This is easy to dismiss: this group is obviously selected to be bullish on AI, and they have incentives to hype their own work and raise funding.

 However, I don’t think their views should be totally discounted. They’re the people with the most visibility into the capabilities of next-generation systems, and the most knowledge of the technology.

 And they’ve also been among the most right about recent progress, even if they’ve been too optimistic.

 Most likely, progress will be slower than they expect, but maybe only by a few years.

 2. AI researchers in general 

One way to reduce selection effects is to look at a wider group of AI researchers than those working on AGI directly, including in academia. This is what Katja Grace did with a survey of thousands of recent AI publication authors.

The survey asked for forecasts of "high-level machine intelligence," defined as AI that can accomplish every task better or more cheaply than humans. The median respondent estimated a 25% chance of this by the early 2030s and a 50% chance by 2047 — with some giving answers within the next few years and others hundreds of years in the future.

The median estimate of the chance of an AI being able to do the job of an AI researcher by 2033 was 5%.¹

They were also asked when they expected AI could perform a list of specific tasks (2023 survey resul

... (truncated, 13 KB total)
Resource ID: 9b2e0ac4349f335e | Stable ID: sid_ZnzncVpjIl