Longterm Wiki

Short Timelines Aren't Obviously Higher Leverage


Published by Forethought.org, this piece is relevant for AI safety researchers and EA community members thinking about how timeline beliefs should (or shouldn't) influence career and prioritization decisions.

Metadata

Importance: 55/100 · blog post · analysis

Summary

This Forethought.org analysis challenges the common assumption that believing in short AI timelines should automatically translate into higher-leverage or more urgent career/resource allocation decisions. It argues that the relationship between timeline beliefs and optimal actions is more complex than often assumed, and that short-timeline framings may not straightforwardly dominate longer-timeline strategies.

Key Points

  • The assumption that short timelines imply higher personal leverage or urgency is not obviously correct and deserves scrutiny.
  • Different timeline beliefs do not map cleanly onto different action priorities—many interventions remain valuable across a wide range of timelines.
  • Career and resource allocation decisions should account for counterfactual impact and comparative advantage, not just timeline beliefs.
  • The argument challenges common heuristics in EA/AI safety communities about how timeline uncertainty should drive prioritization.
  • Short-timeline framings may inadvertently narrow the scope of considered interventions in ways that reduce overall expected impact.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Short AI Timeline Policy Implications | Analysis | 62.0 |

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 40 KB
Short AI Timelines Aren't Always Higher-Leverage

 William MacAskill and Mia Taylor · 22nd January 2026

 Contents

  • Summary
  • Timelines scenarios and why they’re action-relevant
  • Understanding leverage
  • Takeover impact
    • The default value of the future is higher on medium timelines than short or long timelines
    • Shorter timelines allow for larger AI takeover risk reduction
    • Whether short or medium timelines are highest leverage depends on the resources or skillset being deployed
  • Trajectory impact
    • The default probability of averting AI takeover is higher on medium and longer timelines
    • It’s unclear whether feasible value increase is greater or lower on different timelines
  • Conclusion
  • Appendix: BOTEC estimating the default value of the future on different timelines

 This is a rough research note – we’re sharing it for feedback and to spark discussion. We’re less confident in its methods and conclusions.
 Summary 

 Different strategies make sense if timelines to AGI are short than if they are long.
 In deciding when to spend resources to make AI go better, we should consider both:
 
 The probability of each AI timelines scenario. 

 The expected impact, given some strategy, conditional on that timelines scenario. 

 
 We’ll call the second component "leverage." In this note, we'll focus on estimating the differences in leverage between different timeline scenarios and leave the question of their relative likelihood aside. 
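This decomposition can be sketched numerically. The scenario buckets, probabilities, and per-scenario leverage figures below are illustrative placeholders chosen for the example, not numbers from the note:

```python
# Expected impact of a strategy, decomposed as the note suggests:
# sum over timeline scenarios of P(scenario) * leverage(scenario),
# where "leverage" is the expected impact conditional on that scenario.

def expected_impact(scenario_probs, leverage):
    """Sum P(scenario) * leverage(scenario) over all scenarios."""
    return sum(p * leverage[s] for s, p in scenario_probs.items())

# Illustrative numbers only (not taken from the note).
probs = {"short": 0.3, "medium": 0.5, "long": 0.2}
direct_work = {"short": 10, "medium": 6, "long": 2}  # hypothetical leverage
funding = {"short": 4, "medium": 8, "long": 5}       # hypothetical leverage

print(round(expected_impact(probs, direct_work), 2))  # 6.4
print(round(expected_impact(probs, funding), 2))      # 6.2
```

On these made-up figures the two strategies come out close overall, but for different reasons: direct work draws most of its expected impact from short timelines, funding from medium and long ones. This mirrors the note's point that which timeline scenario is "highest leverage" depends on the resource being deployed.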
 People sometimes argue that very short timelines are higher leverage because: 
 
 They are more neglected. 

 AI takeover risk is higher given short timelines. 

 
 These are important points, but the argument misses some major countervailing considerations. Longer timelines: 
 
 Allow us to grow our resources more before the critical period. 

 Give us more time to improve our strategic and conceptual understanding. 

 
 There's a third consideration we think has been neglected: the expected value of the future conditional on reducing AI takeover risk under different timeline scenarios. Two factors pull in opposite directions here: 
 
 Longer timelines give society more time to navigate other challenges that come with the intelligence explosion, which increases the value of the future. 

 But longer timelines mean that authoritarian countries are likely to control more of the future, which decreases it. 

 
 The overall upshot depends on which problems you’re working on and what resources you’re allocating: 
 
 Efforts aimed at reducing AI takeover are probably the highest leverage on 2-10 year timelines. Direct work has the highest leverage on the shorter end of that range; funding on the longer end. 

 Efforts aimed at improving the value of the future conditional on avoiding AI takeover probably have the highest leverage on 10+ year timeline scenarios. 

 
 Timelines scenarios and why they’re

... (truncated, 40 KB total)