A deep critique of AI 2027's bad timeline models
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Timelines | Concept | 95.0 |
| Eli Lifland | Person | 58.0 |
Cached Content Preview
HTTP 200 · Fetched Feb 26, 2026 · 1162 KB
A deep critique of AI 2027's bad timeline models
by titotal · 19th Jun 2025 · Linkpost from titotal.substack.com · 47 min read

Thank you to Arepo and Eli Lifland for looking over this article for errors. I am sorry that this article is so long. Every time I thought I was done with it, I ran into more issues with the model, and I wanted to be as thorough as I could. I'm not going to blame anyone for skimming parts of this article.

Note that the majority of this article was written before Eli's updated model was released (the site was updated June 8th). His new model improves on some of my objections, but the majority still stand.

Introduction:

AI 2027 is an article written by the "AI futures team". The primary piece is a short story penned by Scott Alexander, depicting a month-by-month scenario of a near future where AI becomes superintelligent in 2027, proceeding to automate the entire economy in only a year or two and then either killing us all or not killing us all, depending on government policies.

What makes AI 2027 different from other similar short stories is that it is presented as a forecast based on rigorous modelling and data analysis from forecasting experts. It is accompanied by five appendices of "detailed research supporting these predictions" and a codebase for simulations. They state that "hundreds" of people reviewed the text, including AI expert Yoshua Bengio, although some of these reviewers only saw bits of it.

The scenario in the short story is not the median forecast for any AI futures author, and none of the AI 2027 authors actually believe that 2027 is the median year for a singularity to happen.
But the argument they make is that 2027 is a plausible year, and they back it up with images of sophisticated-looking modelling like the following:

This combination of a compelling short story and seemingly rigorous research may have been the secret sauce that let the article go viral and be treated as a serious project. To quote the authors themselves:

> It's been a crazy few weeks here at the AI Futures Project. Almost a million people visited our webpage; 166,000 watched our Dwarkesh interview. We were invited on something like a million podcasts. Team members gave talks at Harvard, the Federation of American Scientists, and OpenAI.

Now, I was originally happy to dismiss this work and just wait for their predictions to fail, but this thing just keeps spreading, including a YouTube video with millions of views. So I decided to actually dig into the model and the code, and try to understand what the authors were saying and what evidence they were using to back it up. The article is huge, so I focused on one section alone: their "timelines forecast" code and accompanying
... (truncated, 1162 KB total)