Longterm Wiki

What's up with Anthropic predicting AGI by early 2027?


Data Status: Not fetched

Cited by 2 pages:
  - The Case For AI Existential Risk (Argument, quality 66.0)
  - AI Timelines (Concept, quality 95.0)

Cached Content Preview

HTTP 200 | Fetched Feb 26, 2026 | 378 KB
What's up with Anthropic predicting AGI by early 2027?
I operationalize Anthropic's prediction of "powerful AI" and explain why I'm skeptical.
Ryan Greenblatt, Redwood Research blog, Nov 03, 2025

As far as I'm aware, Anthropic is the only AI company with official AGI timelines [1]: they expect AGI by early 2027. In their recommendations (from March 2025) to the OSTP for the AI Action Plan, they say:

"As our CEO Dario Amodei writes in 'Machines of Loving Grace', we expect powerful AI systems will emerge in late 2026 or early 2027. Powerful AI systems will have the following properties: Intellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines—including biology, computer science, mathematics, and engineering. [...]"

They often describe this capability level as a "country of geniuses in a datacenter". This prediction is repeated elsewhere, and Jack Clark confirms that something like this remains Anthropic's view (as of September 2025). Of course, just because this is Anthropic's official prediction [2] doesn't mean that all or even most employees at Anthropic share the same view. [3] However, I do think we can reasonably say that Dario Amodei, Jack Clark, and Anthropic itself are all making this prediction. [4]

I think the creation of transformatively powerful AI systems—systems as capable as or more capable than Anthropic's notion of powerful AI—is plausible within 5 years and more likely than not within 10 years. Correspondingly, I think society is massively underpreparing for the risks associated with such AI systems. However, I think Anthropic's predictions are very unlikely to come true (using the operationalization of powerful AI that I give below, I put powerful AI by early 2027 at around 6% likely).

I do think they should get some credit for making predictions at all (though I wish the predictions were more precise and better operationalized, and that they had also made intermediate predictions for milestones prior to powerful AI). In this post, I'll try to operationalize Anthropic's prediction more precisely so that it can be falsified or proven true, discuss what the timeline through 2027 would need to look like for this prediction to be likely, and explain why I think the prediction is unlikely to come true.

[Thanks to Ajeya Cotra, Ansh Radhakrishnan, Buck Shlegeris, Daniel Kokotajlo, Eli Lifland, James Bradbury, Lukas Finnveden, and Megan Kinniment for comments and/or discussion.]

What does "powerful AI" mean?

Anthropic has talked about what powerful AI means in a few different places. Pulling from Dario Amodei's essay Machines of Loving Grace [5]:

"In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc. In a

... (truncated, 378 KB total)