Longterm Wiki

Research note: A simpler AI timelines model predicts 99% AI R&D automation in ~2032

blog

Data Status: Not fetched

Cited by 1 page

Page: AI Timelines
Type: Concept
Quality: 95.0

Cached Content Preview

HTTP 200 | Fetched Feb 26, 2026 | 930 KB
by Thomas Kwa | 12th Feb 2026 | AI Alignment Forum | Linkpost from metr.org | 10 min read

In this post, I describe a simple model for forecasting when AI will automate AI development. It is based on the AI Futures model, but is more understandable and robust, and has deliberately conservative assumptions. At current rates of compute growth and algorithmic progress, this model's median prediction is >99% automation of AI R&D in late 2032. Most simulations result in a 1000x to 10,000,000x increase in AI efficiency and 300x-3000x research output by 2035. I therefore suspect that existing trends in compute growth and automation will still produce extremely powerful AI on "medium" timelines, even if the full coding automation and superhuman research taste that drive the AIFM's "fast" timelines (superintelligence by ~mid-2031) don't happen.

Why make this?

The AI Futures Model (AIFM) has 33 parameters; this one has 8. I previously summarized the AIFM on LessWrong and found it to be very complex. Its philosophy is to model AI takeoff in great detail, which I find admirable and somewhat necessary given the inherent complexity of the world. More complex models can be more accurate, but they can also be more sensitive to modeling assumptions, prone to overfitting, and harder to understand. The AIFM is extremely sensitive to time horizon in a way I wouldn't endorse. In particular, the "doubling difficulty growth factor", which measures whether time horizon increases superexponentially, could change the date of the automated coder from 2028 to 2049! I suspect that time horizon is too poorly defined to nail down this parameter, and that rough estimates of more direct AI capability metrics, like uplift, can give much tighter confidence intervals.

Scope and limitations

First, this model doesn't treat research taste and software engineering as separate skills/tasks. As such, I see it as making predictions about timelines (time to Automated Coder or Superhuman AI Researcher), not takeoff (the subsequent time from SAR to ASI and beyond). The AIFM can model takeoff because it has a second phase in which the SAR's superhuman research taste causes further AI R&D acceleration on top of coding automation. If superhuman research taste makes AI development orders of magnitude more efficient, takeoff could be faster than this model predicts. Second, this model, like the AIFM, doesn't track effects on the broader economy that feed back into AI progress, the way Epoch's GATE model does. Third, we deliberately make two conservative assumptions:

No full automation: as AIs get more capable, they never automate 100% of AI R&D work,

... (truncated, 930 KB total)
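The dynamics the preview describes (compounding compute growth and algorithmic progress, with an automation fraction that saturates below 100% and feeds back into research speed) can be sketched as a toy Monte Carlo simulation. This is a minimal illustration, not the post's actual model: the parameter ranges, the saturation curve, and the speedup cap below are all assumptions for demonstration, and the post's real model has 8 parameters not shown here.

```python
import math
import random

def simulate(years: int, rng: random.Random) -> tuple[float, float]:
    """One Monte Carlo draw: returns (efficiency multiplier, automation fraction)
    after `years` of compounding progress. All parameter ranges are illustrative
    assumptions, not values from the post."""
    compute_mult = rng.uniform(2.5, 4.5)   # assumed yearly compute growth factor
    base_algo = rng.uniform(1.5, 3.0)      # assumed baseline algorithmic progress per year
    cap = rng.uniform(0.90, 0.999)         # automation asymptote: never reaches 100%
    efficiency = 1.0                       # combined compute x algorithmic efficiency
    automation = 0.0
    for _ in range(years):
        # automation fraction rises with log-efficiency but saturates at `cap`,
        # encoding the post's "no full automation" conservative assumption
        automation = cap * (1 - math.exp(-math.log10(efficiency + 1)))
        # automated labor accelerates algorithmic progress; the speedup is
        # capped (an arbitrary choice here) so remaining human tasks bottleneck it
        speedup = min(1 / (1 - automation), 5)
        efficiency *= compute_mult * base_algo ** speedup
    return efficiency, automation

rng = random.Random(0)
results = [simulate(9, rng) for _ in range(1000)]  # roughly 2026 -> 2035
effs = sorted(e for e, _ in results)
print(f"median efficiency multiplier after 9 years: {effs[500]:.1e}")
print(f"max automation fraction reached: {max(a for _, a in results):.3f}")
```

The key structural feature this sketch shares with the described model is the feedback loop with a ceiling: partial automation accelerates progress, but because the automation fraction asymptotes below 1, the remaining human-performed work bounds the acceleration rather than letting it diverge.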
Resource ID: d4b3b448428a5079 | Stable ID: MTI1N2E1MD