Longterm Wiki

Long Timelines to Advanced AI Have Changed My Mind

blog

Credibility Rating

2/5 (Mixed)

Mixed quality. Some useful content but inconsistent editorial standards. Claims should be verified.

Rating inherited from publication venue: Substack

Written by Helen Toner (CSET, former OpenAI board), this post is notable for its insider perspective on how expert consensus on AI timelines has shifted, making it relevant for governance and strategy discussions in the AI safety community.

Metadata

Importance: 62/100 · blog post · commentary

Summary

Helen Toner, former OpenAI board member, argues that AI timelines have compressed so dramatically that the old debate between 'short' (10-20 year) and 'long' timelines is now obsolete. With leading AI company heads forecasting AGI-like systems by 2026-2027, she contends the urgent challenge is no longer whether to prepare for advanced AI but how to govern and respond to its near-term arrival.

Key Points

  • What was once considered 'short timeline' thinking (10-20 years) now seems slow compared to current predictions of human-level AI within 1-5 years.
  • Scaling laws and reasoning models have been key drivers compressing expert forecasts for advanced AI capabilities.
  • Major AI lab leaders (e.g., at OpenAI, Anthropic) are publicly forecasting AGI-like systems arriving around 2026-2027.
  • The old framing of 'should we prepare for advanced AI?' is obsolete; the new question is how to govern imminent near-term risks.
  • This timeline shift has profound implications for AI governance, policy urgency, and the prioritization of safety research.

Cited by 1 page

Page: Short AI Timeline Policy Implications | Type: Analysis | Quality: 62.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 11 KB
Rising Tide

"Long" timelines to advanced AI have gotten crazy short

 The prospect of reaching human-level AI in the 2030s should be jarring

Helen Toner · Apr 01, 2025

Welcome to Rising Tide! I’m publishing 3 posts this week to celebrate the launch of this Substack—this is post #1. New posts will be more intermittent after this week. Subscribe to get them straight in your inbox.


It used to be a bold claim, requiring strong evidence, to argue that we might see anything like human-level AI any time in the first half of the 21st century. This 2016 post, for instance, spends 8,500 words justifying the claim that there is a greater than 10% chance of advanced AI being developed by 2036.

(Arguments about timelines typically refer to “timelines to AGI,” but throughout this post I’ll mostly refer to “advanced AI” or “human-level AI” rather than “AGI.” In my view, “AGI” as a term of art tends to confuse more than it clarifies, since different experts use it in such different ways.[1] So the fact that “human-level AI” sounds vaguer than “AGI” is a feature, not a bug—it naturally invites reactions of “human-level at what?” and “how are we measuring that?” and “is this even a meaningful bar?” and so on, which I think are totally appropriate questions as long as they’re not used to deny the overall trend towards smarter and more capable systems.)

 Back in the dark days before ChatGPT, proponents of “short timelines” argued there was a real chance that extremely advanced AI systems would be developed within our lifetimes—perhaps as soon as within 10 or 20 years. If so, the argument continued, then we should obviously start preparing—investing in AI safety research, building international consensus around what kinds of AI systems are too dangerous to build or deploy, beefing up the security of companies developing the most advanced systems so adversaries couldn’t steal them, and so on. These preparations could take years or decades, the argument went, so we should get to work right away.

Opponents with “long timelines” would counter that, in fact, there was no evidence that AI was going to get very advanced any time soon (say, any time in the next 30 years).[2] We should thus ignore any concerns associated with advanced AI and focus instead on the here-and-now problems associated with much less sophisticated systems, such as bias, surveillance, and poor labor conditions. Depending on the disposition of the speaker, problems from AGI might be banished forever as “science fiction” or simply relegat

... (truncated, 11 KB total)
Resource ID: c60a74482c11b551 | Stable ID: sid_F9T5SV9cGz