80,000 Hours AGI Timelines Review
Author
Benjamin Todd
Credibility Rating
3/5
Good (3/5). Good quality: reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: 80,000 Hours
Data Status
Full text fetched (Dec 28, 2025)
Summary
A review of expert predictions on Artificial General Intelligence (AGI) from multiple groups, showing views converging on the possibility that AGI arrives before 2030. Different expert groups, including AI company leaders, researchers, and forecasters, have shortened their estimates in recent years, and those estimates are increasingly similar.
Key Points
- Expert AGI timelines have dramatically shortened, with many now predicting arrival before 2030
- Different expert groups show converging but still uncertain predictions
- No single forecast should be taken as definitive, but the collective view suggests AGI is a realistic near-term possibility
Review
The source provides a nuanced overview of AGI timeline predictions from five different expert groups, revealing a striking trend of converging and dramatically shortened estimates. AI company leaders, researchers, and forecasting platforms like Metaculus have progressively reduced their AGI arrival predictions, with many now suggesting a potential timeline between 2026 and 2032. The analysis critically examines each group's strengths and limitations, highlighting potential biases such as selection effects, incentive structures, and varying levels of technological expertise. While no single group's forecast can be considered definitive, the collective view suggests that AGI is no longer a distant, purely speculative concept, but a near-term possibility that warrants serious consideration. The review emphasizes the importance of maintaining uncertainty while recognizing the significant potential for transformative AI development in the coming decade.
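The review's central move is combining forecasts from groups with different biases rather than trusting any single one. One standard way to do that (not described in the source; the probabilities below are purely illustrative placeholders, not figures from the article) is to pool each group's P(AGI before 2030) via the geometric mean of odds, which dampens extreme estimates:

```python
import math

# Hypothetical P(AGI before 2030) for the five groups the article surveys.
# These numbers are illustrative assumptions, NOT taken from the source.
group_probs = {
    "AI company leaders": 0.60,
    "AI researchers (survey)": 0.15,
    "Metaculus": 0.50,
    "XPT superforecasters": 0.10,
    "Samotsvety": 0.40,
}

def pool_geometric_odds(probs):
    """Pool probabilities by averaging their log-odds (equivalently,
    taking the geometric mean of the odds), then converting back."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    odds = math.exp(mean_log_odds)
    return odds / (1 + odds)

pooled = pool_geometric_odds(group_probs.values())
print(f"Pooled P(AGI before 2030): {pooled:.2f}")
```

With these placeholder inputs the pooled estimate lands between the bullish and skeptical groups, illustrating the review's point that the aggregate view treats near-term AGI as plausible without endorsing any one group's number.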
Cited by 7 pages
| Page | Type | Quality |
|---|---|---|
| The Case For AI Existential Risk | Argument | 66.0 |
| AGI Development | -- | 52.0 |
| AGI Timeline | Concept | 59.0 |
| Novel / Unknown Approaches | Capability | 53.0 |
| AI Risk Critical Uncertainties Model | Crux | 71.0 |
| Metaculus | Organization | 50.0 |
| Long-Timelines Technical Worldview | Concept | 91.0 |
Cached Content Preview
HTTP 200 | Fetched Feb 26, 2026 | 121 KB
Shrinking AGI timelines: a review of expert forecasts | 80,000 Hours

On this page: 1 AI experts · 1.1 Leaders of AI companies · 1.2 AI researchers in general · 2 Expert forecasters · 2.1 Metaculus · 2.2 Superforecasters in 2022 (XPT survey) · 2.3 Samotsvety in 2023 · 3 Summary of expert views on when AGI will arrive · 4 Learn more

As a non-expert, it would be great if there were experts who could tell us when we should expect artificial general intelligence (AGI) to arrive. Unfortunately, there aren't. There are only different groups of experts with different weaknesses. This article is an overview of what five different types of experts say about when we'll reach AGI, and what we can learn from them (that feeds into my full article on forecasting AI). In short:

- Every group shortened their estimates in recent years.
- AGI before 2030 seems within the range of expert opinion, even if many disagree.
- None of the forecasts seem especially reliable, so they neither rule in nor rule out AGI arriving soon.

In four years, the mean estimate on Metaculus for when AGI will be developed has plummeted from 50 years to five years. There are problems with the definition used, but the graph reflects a broader pattern of declining estimates. Here's an overview of the five groups:

AI experts

1. Leaders of AI companies

The leaders of AI companies are saying that AGI arrives in 2–5 years, and appear to have recently shortened their estimates. This is easy to dismiss. This group is obviously selected to be bullish on AI and wants to hype their own work and raise funding. However, I don't think their views should be totally discounted. They're the people with the most visibility into the capabilities of next-generation systems, and the most knowledge of the technology. And they've also been among the most right about recent progress, even if they've been too optimistic. Most likely, progress will be slower than they expect, but maybe only by a few years.

2. AI researchers in general

One way to reduce selection effects is to look at a wider group of AI researchers than those working on AGI directly, including in academia. This is what Katja Grace did with a survey of thousands of recent AI publication authors. The survey asked for forecasts of "high-level machine intelligence," defined as when AI can accomplish every task better or more cheaply than humans. The median estimate was a 25% chance in the early 2030s and 50% by 2047 — with some giving answers in the next few years and others hundreds of years in the future. The median estimate of the chance of an AI being able to do the job of an AI researcher by 2033 was 5%. [1] They were also aske
... (truncated, 121 KB total)
Resource ID: f2394e3212f072f5 | Stable ID: NjNiMTk2ZD