A deep critique of AI 2027's bad timeline models
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: LessWrong
A technical counterpoint to the influential AI 2027 forecasting project; particularly relevant for those evaluating the empirical basis of short-timeline AGI arguments and the reliability of quantitative AI forecasting models.
Forum Post Details
Metadata
Summary
A computational physicist conducts a detailed technical review of the AI 2027 project's forecast code and methodology, arguing that its model predicting a 2027 AI singularity has fundamental flaws including insufficient empirical validation, problematic parameter estimates, and discrepancies between written methodology and actual code. The critique challenges the project's viral credibility despite endorsements from prominent figures in the AI safety space.
Key Points
- The AI 2027 project's timeline forecast models are argued to have a flawed fundamental structure, with the 'superexponential' growth curve being poorly justified empirically (see the toy sketch after this list).
- Significant discrepancies exist between the project's written methodology and its actual code implementation, raising concerns about transparency and rigor.
- Parameter estimates used in the models are critiqued as insufficiently grounded, potentially biasing results toward accelerated timelines.
- The author identifies at least six alternative narratives consistent with the same data, undermining the project's framing of its forecast as uniquely supported.
- Despite viral success and high-profile endorsements, the critique argues the modeling foundation does not support high confidence in a 2027 singularity scenario.
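To make the first key point concrete, here is a minimal, illustrative sketch (not the AI 2027 authors' code) contrasting an exponential time-horizon curve with a 'superexponential' one in which each successive doubling takes a fixed fraction less time than the one before. All parameter values are placeholders chosen for illustration, not estimates from the forecast.

```python
import math

def exponential_horizon(t_years, h0_hours=1.0, doubling_time=0.5):
    """Exponential curve: the horizon doubles every `doubling_time` years."""
    return h0_hours * 2 ** (t_years / doubling_time)

def superexponential_horizon(t_years, h0_hours=1.0,
                             first_doubling=0.5, shrink=0.9):
    """'Superexponential' curve: each doubling takes `shrink` times as long as
    the previous one, so the doubling times form a geometric series and the
    horizon diverges at t_sing = first_doubling / (1 - shrink)."""
    t_sing = first_doubling / (1.0 - shrink)
    if t_years >= t_sing:
        return math.inf
    # Invert the partial geometric sum  t = first_doubling*(1 - shrink**k)/(1 - shrink)
    # to find how many doublings k fit into t_years.
    k = math.log(1.0 - t_years * (1.0 - shrink) / first_doubling, shrink)
    return h0_hours * 2 ** k

for t in (0.0, 1.0, 2.0, 3.0, 4.0, 4.9):
    print(f"t = {t:3.1f} yr   exponential: {exponential_horizon(t):14.1f} h   "
          f"superexponential: {superexponential_horizon(t):16.1f} h")
```

The toy model's only purpose is to show that the 'superexponential' assumption builds a finite-time divergence into the curve by construction; whether that modelling choice is empirically justified is exactly what the critique contests.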
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Timelines | Concept | 95.0 |
| Eli Lifland | Person | 58.0 |
Cached Content Preview
# A deep critique of AI 2027’s bad timeline models
By titotal
Published: 2025-06-19
*Thank you to Arepo and Eli Lifland for looking over this article for errors.*
*I am sorry that this article is so long. Every time I thought I was done with it I ran into more issues with the model, and I wanted to be as thorough as I could. I’m not going to blame anyone for skimming parts of this article.*
*Note that the majority of this article was written before [Eli’s updated model](https://ai-2027.com/research/timelines-forecast#2025-may-7-update) was released (the site was updated June 8th). His new model improves on some of my objections, but the majority still stand.*
**Introduction:**
-----------------
[AI 2027](https://ai-2027.com/) is an article written by the “AI futures team”. The primary piece is a short story penned by Scott Alexander, depicting a month-by-month scenario of a near future in which AI becomes superintelligent in 2027, proceeds to automate the entire economy in only a year or two, and then either kills us all or does not kill us all, depending on government policies.
What makes AI 2027 different from other similar short stories is that it is presented as a forecast based on rigorous modelling and data analysis from forecasting experts. It is accompanied by [five appendices](https://ai-2027.com/research) of “detailed research supporting these predictions” and a codebase for simulations. They state that [“hundreds” of people reviewed](https://ai-2027.com/about) the text, including AI expert Yoshua Bengio, although some of these reviewers [only saw bits of it](https://garymarcus.substack.com/p/the-ai-2027-scenario-how-realistic).
The scenario in the short story is not the median forecast for any AI futures author, and none of the AI 2027 authors actually believe that 2027 is the median year for a singularity to happen. But the argument they make is that 2027 is a *plausible* year, and they back it up with images of sophisticated-looking modelling like the following:
*[Forecast figure from AI 2027; image not preserved in the cached preview.]*
This combination of a compelling short story and seemingly rigorous research may have been the secret sauce that let the article go viral and be treated as a serious project. To [quote the authors themselves](https://blog.ai-futures.org/p/ai-2027-media-reactions-criticism):
*It’s been a crazy few weeks here at the AI Futures Project. Almost a million people visited [our webpage](https://ai-2027.com/); 166,000 watched [our Dwarkesh interview](https://www.youtube.com/watch?v=htOvH12T7mU). We were invited on something like a million podcasts. Team members gave talks at Harvard, the Federation of American Scientists, and OpenAI.*
Now, I was originally happy to dismiss this work and just wait for their predictions to fail, but this thing just kee
... (truncated, 80 KB total)