AI 2027
ai-2027.com
April 3rd 2025

Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean

We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution. We wrote a scenario that represents our best guess about what that might look like.1 It is informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes.2

(Added Nov 22 2025, to prevent misunderstandings: we don't know exactly when AGI will be built. 2027 was our modal (most likely) year at the time of publication; our medians were somewhat longer.3 For our latest forecasts, see here.)

What is this? How did we write it? Why is it valuable? Who are we?

The CEOs of OpenAI, Google DeepMind, and Anthropic have all predicted that AGI will arrive within the next 5 years. Sam Altman has said OpenAI is setting its sights on “superintelligence in the true sense of the word” and the “glorious future.” What might that look like? We wrote AI 2027 to answer that question.

Claims about the future are often frustratingly vague, so we tried to be as concrete and quantitative as possible, even though this means depicting one of many possible futures. We wrote two endings: a “slowdown” ending and a “race” ending.

However, AI 2027 is not a recommendation or exhortation. Our goal is predictive accuracy.4 We encourage you to debate and counter this scenario.5 We hope to spark a broad conversation about where we’re headed and how to steer toward positive futures. We’re planning to give out thousands in prizes to the best alternative scenarios.

Our research on key questions (e.g. what goals will future AI agents have?) can be found here.

The scenario itself was written iteratively: we wrote the first period (up to mid-2025), then the following period, and so on until we reached the ending. We then scrapped this and did it again. We weren’t trying to reach any particular ending.
After we finished the first ending (which is now colored red), we wrote a new alternative branch because we wanted to also depict a more hopeful way things could end, starting from roughly the same premises. This went through several iterations.6

Our scenario was informed by approximately 25 tabletop exercises and feedback from over 100 people, including dozens of experts in each of AI governance and AI technical work.

“I highly recommend reading this scenario-type prediction on how AI could transform the world in just a few years. Nobody has a crystal ball, but this type of content can help notice important questions and illustrate the potential impact of emerging risks.” — Yoshua Bengio7

We have set ourselves an impossible task. Trying to predict how superhuman AI in 2027 would go is like trying to predict how World War 3 in 2027 would go, except that it is an even larger departure from past case studies. Yet it is still valuable to attempt, just as it is valuable for the U.S. military to game out Tai
... (truncated)