Eli Lifland
Biographical profile of Eli Lifland, a top-ranked forecaster and AI safety researcher who co-authored the AI 2027 scenario forecast and co-founded the AI Futures Project. The page documents his forecasting track record, the AI Futures timelines model, and his contributions to AI safety discourse.
Quick Assessment
| Dimension | Assessment |
|---|---|
| Primary Focus | AGI forecasting, scenario planning, AI governance |
| Key Achievements | #1 RAND Forecasting Initiative all-time leaderboard; co-authored AI 2027 scenario forecast; co-lead of Samotsvety forecasting team |
| Current Roles | Co-founder and researcher at AI Futures Project; co-founder/advisor at Sage; guest fund manager at Long Term Future Fund |
| Educational Background | Computer science and economics degrees from University of Virginia |
| Notable Contributions | AI 2027 scenario forecast; AI Futures timelines model; top-ranked forecasting track record |
Key Links
| Source | Link |
|---|---|
| Official Website | elilifland.com |
Overview
Eli Lifland is a forecaster and AI safety researcher who ranks #1 on the RAND Forecasting Initiative all-time leaderboard. He co-leads the Samotsvety forecasting team, which placed first in the CSET-Foretell/INFER competition in 2020, 2021, and 2022.[1] His work focuses on AGI timeline forecasting, scenario planning, and AI safety.
Lifland co-founded the AI Futures Project alongside Daniel Kokotajlo and Thomas Larsen, and co-authored AI 2027, a detailed scenario forecast exploring potential AGI development trajectories.[2][3] The project, with contributions from Scott Alexander and Romeo Dean, provides a concrete scenario for how superhuman AI capabilities might emerge, including geopolitical tensions, technical breakthroughs, and alignment challenges.
Lifland also co-founded and advises Sage, an organization building interactive AI explainers and forecasting tools, and serves as a guest fund manager at the Long Term Future Fund.[4] He previously worked on Elicit at Ought and co-created TextAttack, a Python framework for adversarial attacks in natural language processing.[5]
AI Futures Project and AI 2027
Lifland is a co-founder and researcher at the AI Futures Project, a 501(c)(3) organization focused on AGI forecasting, scenario planning, and policy engagement.[6] The organization was co-founded with Daniel Kokotajlo (Executive Director, former OpenAI researcher) and Thomas Larsen (founder of the Center for AI Policy).[7]
The project's flagship output is AI 2027, a detailed scenario forecast released in April 2025 exploring how superintelligence might emerge.[8] The scenario was co-authored with Scott Alexander (who primarily assisted with rewriting) and Romeo Dean (who contributed supplements on compute and security considerations).[9]
The AI 2027 forecast presents a concrete narrative of AI development including:
- Increasingly capable AI agents automating significant portions of AI research and development[10]
- Geopolitical tensions, particularly a US-China AI race, influencing safety decisions and deployment timelines[11]
- Alignment challenges, including exploration of safer model series using chain-of-thought reasoning to address failures[12]
- Economic impacts, including widespread job displacement[13]
The project received significant attention and has been discussed in venues including Lawfare Media, ControlAI, and a CEPR webinar.[14][15][16]
AI Futures Timelines Model
The AI Futures Project maintains a quantitative timelines model that generates probability distributions for key AGI milestones such as Automated Coder (AC) and superintelligence (ASI). The model incorporates benchmark tracking, compute availability, algorithmic progress, and other inputs to produce forecasts that team members then adjust based on their individual judgment.[17]
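As a rough illustration of how such a model produces a probability distribution rather than a point estimate, the sketch below runs a Monte Carlo simulation over uncertain inputs. This is not the project's actual model; the input names and all parameter values are hypothetical placeholders.

```python
import random

def sample_milestone_year(n_samples=10_000, seed=0):
    """Monte Carlo over uncertain inputs -> distribution of milestone years.

    Illustrative only: the real AI Futures timelines model is far more
    elaborate; both inputs and their distributions here are invented.
    """
    rng = random.Random(seed)
    years = []
    for _ in range(n_samples):
        # Hypothetical inputs: remaining capability gap (in "doublings")
        # and the annual rate of progress from compute plus algorithms.
        doublings_needed = rng.lognormvariate(1.5, 0.5)
        doublings_per_year = rng.lognormvariate(0.0, 0.4)
        years.append(2025 + doublings_needed / doublings_per_year)
    return years

samples = sorted(sample_milestone_year())
n = len(samples)
median, p10, p90 = samples[n // 2], samples[n // 10], samples[9 * n // 10]
print(f"median ~{median:.0f}, 10th-90th percentile ~{p10:.0f}-{p90:.0f}")
```

Because the inputs are right-skewed and divided, the resulting year distribution has a long right tail, which is why such models are usually reported as percentile ranges that forecasters then adjust by judgment.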
Lifland's personal AGI timeline estimates have shifted as new evidence has emerged. His median TED-AI (a general intelligence milestone) forecast has followed this trajectory:[18]
- 2021: ~2060
- July 2022: ~2050
- January 2024: ~2038
- Mid-2024: ~2035
- December 2024: ~2032
- April 2025: ~2031
- July 2025: ~2033
- January 2026: ~2035
The AI Futures Project has emphasized that the AI 2027 scenario was never intended as a confident prediction that AGI would arrive in 2027, and that all team members maintain high uncertainty about when AGI and ASI will be built.[19] The December 2025 model update lengthened the forecast timelines to full coding automation by 3-5 years relative to the April 2025 AI 2027 forecast, primarily because of more conservative modeling of pre-automation AI R&D speedups and recognition of potential data bottlenecks.[20]
Forecasting Track Record
Lifland ranks #1 on the RAND Forecasting Initiative (CSET-Foretell/INFER) all-time leaderboard.[21] On GJOpen, his Brier score of 0.23 outperforms the median of 0.301 (a ratio of 0.76), and as of September 2022 he had placed 2nd in the Metaculus Economist 2021 tournament and 1st in the Salk Tournament.[22]
As co-lead of the Samotsvety forecasting team (approximately 15 forecasters), Lifland helped guide the team to first-place finishes in the INFER competition in 2020, 2021, and 2022.[23] In 2020, Samotsvety placed 1st with a relative score of -0.912 versus -0.062 for 2nd place; in 2021, they placed 1st with -3.259 versus -0.889 for 2nd place (more negative scores indicate better performance relative to the field). Samotsvety holds positions 1, 2, 3, and 4 in INFER's all-time ranking, and some members have achieved Superforecaster status.[24]
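The Brier score cited above is simply the mean squared error between probability forecasts and binary (0/1) outcomes, so lower is better. A minimal sketch, using hypothetical forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example questions with binary outcomes:
outcomes = [1, 0, 1, 0]
sharp = brier_score([0.9, 0.2, 0.8, 0.1], outcomes)   # confident and right
vague = brier_score([0.5, 0.5, 0.5, 0.5], outcomes)   # maximally uncertain
print(round(sharp, 3), round(vague, 3))  # 0.025 0.25 (lower is better)
print(round(0.23 / 0.301, 2))            # 0.76, the ratio cited above
```

Note that always answering 0.5 scores exactly 0.25, so a median of 0.301 implies the typical GJOpen forecaster does worse than pure uncertainty on these questions, while 0.23 beats it.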
The team has produced public forecasts on critical topics including AI existential risk and nuclear risk.[25]
Sage and AI Digest
Lifland co-founded Sage, an organization focused on building interactive AI explainers and forecasting tools.[26] One of Sage's key projects is AI Digest, which received $550,000 from Coefficient Giving for its work, with an additional $550,000 for forecasting projects.[27] The organization aims to make AI developments more accessible to broader audiences through interactive tools and clear explanations.
Role in the AI Safety Community
Lifland is active in the AI safety and alignment communities, particularly through LessWrong and the Effective Altruism Forum. He serves as a mentor in the MATS Program (Strategy & Forecasting and Policy & Governance streams).[28] He has also been featured in the documentary "Making God," which explores AGI risks.[29]
Lifland has taken the Giving What We Can Pledge, committing to donate 10% of his lifetime income to effective charities.[30]
Criticisms and Controversies
Lifland's work, particularly the AI 2027 timelines model, has faced methodological criticism from community members. In a detailed critique posted to LessWrong, the EA Forum, and Substack, forecaster "titotal" described the model's fundamental structure as "highly questionable," with little empirical validation and poor justification for parameters like superexponential time-horizon growth curves.[31] Titotal argued that models need strong conceptual and empirical justifications before influencing major decisions, characterizing AI 2027 as resembling a "shoddy toy model stapled to a sci-fi short story" disguised as rigorous research.[32]
Critics have also raised concerns about philosophical overconfidence, warning that popularizing flawed models could lead people to make significant life decisions based on shaky forecasts.[33] Others counter that inaction on short timelines could be costlier if the forecasts prove accurate.[34]
Lifland responded to these criticisms by acknowledging errors and reviewing titotal's critique for factual accuracy. He agreed to changes in the model write-up and paid $500 bounties to both titotal and another critic, Peter Johnson, for identifying issues.[35][36] The team released a detailed response explaining their reasoning more thoroughly, including their justification for the model's assumptions.[37]
Other criticisms include:
- Lack of skeptic engagement: some community members felt AI 2027 did not sufficiently address skeptical frameworks or justify its models against competing views[38]
- Unverifiable predictions: concerns that some predictions are difficult to validate empirically[39]
Lifland has been forthright about forecast misses and has regularly updated his timelines as new evidence emerges.[40] No major personal controversies or ethical issues have been documented beyond these methodological debates.
Sources
Footnotes
1. Lawfare Media - Daniel Kokotajlo and Eli Lifland on AI 2027
2. Eli Lifland Personal Website
3. Eli Lifland Google Scholar Profile
4. AI Futures Project About Page
5. AI Futures Project About Page
6. ControlAI Newsletter - Future of AI Special Edition
7. Lawfare Media - Daniel Kokotajlo and Eli Lifland on AI 2027
8. ControlAI Newsletter - Future of AI Special Edition
9. CEPR Webinar - AI 2027 Scenario Forecast
10. AI Futures Blog - Clarifying Timelines Forecasts
11. (citation data unavailable)
12. AI Futures Blog - Clarifying Timelines Forecasts
13. Marketing AI Institute - Moving Back AGI Timeline
14. EA Forum - Samotsvety's AI Risk Forecasts
15. Eli Lifland Personal Website
16. Manifund - AI Digest Project
17. MATS Program - Eli Lifland Mentor Profile
18. EA Forum - Making God Documentary
19. Eli Lifland Personal Website
20. LessWrong - Deep Critique of AI 2027 Timeline Models
21. LessWrong - Deep Critique of AI 2027 Timeline Models
22. EA Forum - Practical Value of Flawed Models
23. EA Forum - Practical Value of Flawed Models
24. AI Futures Notes Substack - Response to Titotal Critique
25. EA Forum - Practical Value of Flawed Models
26. AI Futures Notes Substack - Response to Titotal Critique
27. ControlAI Newsletter - Future of AI Special Edition
28. AI Futures Blog - Clarifying Timelines Forecasts
References
“Not to mince words, I think it’s pretty bad. It’s not just that I disagree with their parameter estimates, it’s that I think the fundamental structure of their model is highly questionable and at times barely justified, there is very little empirical validation of the model, and there are parts of the code that the write-up of the model straight up misrepresents.”
“All-things-considered forecasts: Our forecasts for what will happen in the world, including adjustments on top of the outputs of our timelines and takeoff models.”
“We’ve done our best to make it clear that it has never been the case that we were confident AGI would arrive in 2027.”
“2018: 2070. Early 2020: 2050. Nov 2020: 2030. Aug 2021: 2029. Early 2022: 2029. Dec 2022: 2027. Nov 2023: 2027. Jan 2024: 2027. Feb 2024: 2027. Jan 2025: 2027. Feb 2025: 2028. Apr 2025: 2028. Aug 2025: EOY 2029 (2030.0). Nov 2025: 2030. Jan 2026: Dec 2030 (2030.95).”
“OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.”
“The alignment plan OpenBrain follows the Leike & Sutskever (2023) playbook: now that they have a model capable of greatly speeding up alignment research (especially coding portions), they will use existing alignment techniques like deliberative alignment and weak-to-strong generalization to try to get it to internalize the Spec in the right way.”
“AI has started to take jobs, but has also created new ones.”