Eli Lifland
Biographical profile of Eli Lifland, a top-ranked forecaster and AI safety researcher who co-authored the AI 2027 scenario forecast and co-founded the AI Futures Project. The page documents his forecasting track record, the AI Futures timelines model, and his contributions to AI safety discourse.
Quick Assessment
| Dimension | Assessment |
|---|---|
| Primary Focus | AGI forecasting, scenario planning, AI governance |
| Key Achievements | #1 on RAND Forecasting Initiative all-time leaderboard; co-authored the AI 2027 scenario forecast; co-lead of the Samotsvety forecasting team |
| Current Roles | Co-founder and researcher at the AI Futures Project; co-founder and advisor at Sage; guest fund manager at the Long Term Future Fund |
| Educational Background | Computer science and economics degrees from University of Virginia |
| Notable Contributions | AI 2027 scenario forecast; AI Futures timelines model; top-ranked forecasting track record |
Key Links
| Source | Link |
|---|---|
| Official Website | elilifland.com |
Overview
Eli Lifland is a forecaster and AI safety researcher who ranks #1 on the RAND Forecasting Initiative all-time leaderboard. He co-leads the Samotsvety forecasting team, which placed first in the CSET-Foretell/INFER competition in 2020, 2021, and 2022.[1] His work focuses on AGI timeline forecasting, scenario planning, and AI safety.
Lifland co-founded the AI Futures Project alongside Daniel Kokotajlo and Thomas Larsen, and co-authored AI 2027, a detailed scenario forecast exploring potential AGI development trajectories.[2][3] The project, with contributions from Scott Alexander and Romeo Dean, provides a concrete scenario for how superhuman AI capabilities might emerge, including geopolitical tensions, technical breakthroughs, and alignment challenges.
Lifland also co-founded and advises Sage, an organization building interactive AI explainers and forecasting tools, and serves as a guest fund manager at the Long Term Future Fund.[4] He previously worked on Elicit at Ought and co-created TextAttack, a Python framework for adversarial attacks in natural language processing.[5]
AI Futures Project and AI 2027
Lifland is a co-founder and researcher at the AI Futures Project, a 501(c)(3) organization focused on AGI forecasting, scenario planning, and policy engagement.[6] He co-founded the organization with Daniel Kokotajlo (Executive Director, former OpenAI researcher) and Thomas Larsen (founder of the Center for AI Policy).[7]
The project's flagship output is AI 2027, a detailed scenario forecast released in April 2025 exploring how superintelligence might emerge.[8] The scenario was co-authored with Scott Alexander (who primarily assisted with rewriting) and Romeo Dean (who contributed supplements on compute and security considerations).[9]
The AI 2027 forecast presents a concrete narrative of AI development including:
- Increasingly capable AI agents automating significant portions of AI research and development[10]
- Geopolitical tensions, particularly a US-China AI race, influencing safety decisions and deployment timelines[11]
- Alignment challenges, including exploration of safer model series using chain-of-thought reasoning to address failures[12]
- Economic impacts, including widespread job displacement[13]
The project received significant attention and has been discussed in venues including Lawfare Media, the ControlAI newsletter, and a CEPR webinar.[14][15][16]
AI Futures Timelines Model
The AI Futures Project maintains a quantitative timelines model that generates probability distributions for key AGI milestones such as Automated Coder (AC) and superintelligence (ASI). The model incorporates benchmark tracking, compute availability, algorithmic progress, and other inputs to produce forecasts that team members then adjust based on their individual judgment.[17]
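The public write-ups describe this model only at a high level. Purely as an illustrative sketch of how a milestone forecast of this general kind can be produced (every distribution, parameter, and the `simulate_milestone_year` function below are hypothetical placeholders, not the actual AI Futures model), one can Monte Carlo sample over uncertain inputs and read off a probability distribution over arrival years:

```python
import random
import statistics

def simulate_milestone_year(n_samples=100_000, base_year=2026, seed=0):
    """Toy Monte Carlo timelines model: sample uncertain inputs and
    return a list of simulated calendar years when a milestone arrives.
    All distributions and parameters here are hypothetical placeholders."""
    rng = random.Random(seed)
    years = []
    for _ in range(n_samples):
        # Hypothetical: remaining "gap" to the milestone, measured in
        # effective years of progress at today's rate (wide lognormal).
        gap = rng.lognormvariate(2.0, 0.7)        # median ~7.4 effective years
        # Hypothetical: annual compounding speedup from algorithms/compute.
        speedup = max(1.0, rng.gauss(1.15, 0.1))  # ~15%/yr faster on average
        # Convert the effective-years gap into calendar years.
        t, remaining, rate = 0, gap, 1.0
        while remaining > 0 and t < 80:
            remaining -= rate   # one calendar year of progress at current rate
            rate *= speedup     # progress compounds year over year
            t += 1
        years.append(base_year + t)
    return years

years = simulate_milestone_year()
print("median arrival year:", statistics.median(years))
print("P(milestone by 2031):", sum(y <= 2031 for y in years) / len(years))
```

The point of the sketch is only the shape of the exercise: uncertain inputs in, a full distribution out, which forecasters can then adjust with individual judgment as described above.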
Lifland's personal AGI timeline estimates have shifted as new evidence has emerged. His median TED-AI (a general intelligence milestone) forecast has followed this trajectory:[18]
- 2021: ~2060
- July 2022: ~2050
- January 2024: ~2038
- Mid-2024: ~2035
- December 2024: ~2032
- April 2025: ~2031
- July 2025: ~2033
- January 2026: ~2035
The AI Futures Project has emphasized that the AI 2027 scenario was never intended as a confident prediction that AGI would arrive in 2027, and that all team members maintain high uncertainty about when AGI and ASI will be built.[19] The December 2025 model update predicted timelines to full coding automation 3-5 years longer than the April 2025 AI 2027 forecast, attributed primarily to more conservative modeling of pre-automation AI R&D speedups and recognition of potential data bottlenecks.[20]
Forecasting Track Record
Lifland ranks #1 on the RAND Forecasting Initiative (CSET-Foretell/INFER) all-time leaderboard.[21] On GJOpen, his Brier score of 0.23 outperforms the median of 0.301 (a ratio of 0.76), and as of September 2022 he had placed 2nd in the Metaculus Economist 2021 tournament and 1st in the Salk Tournament.[22]
As co-lead of the Samotsvety Forecasting team (approximately 15 forecasters), Lifland helped guide the team to first-place finishes in the INFER competition in 2020, 2021, and 2022.[23] In 2020, Samotsvety placed 1st with a relative score of -0.912 compared to -0.062 for 2nd place. In 2021, they achieved 1st with a relative score of -3.259 compared to -0.889 for 2nd place. Samotsvety holds positions 1, 2, 3, and 4 in INFER's all-time ranking, with some members achieving Superforecaster status.[24]
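For context on these numbers: a Brier score is the mean squared error between probability forecasts and binary outcomes, so lower is better (0.0 is perfect, and always guessing 0.5 scores 0.25), and the 0.76 ratio above is simply 0.23 / 0.301. A minimal sketch of the calculation (the four forecasts and outcomes below are made up for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better: 0.0 is perfect; always guessing 0.5 scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts on four resolved yes/no questions.
forecasts = [0.9, 0.1, 0.8, 0.2]
outcomes  = [1,   0,   1,   0]
print(brier_score(forecasts, outcomes))  # 0.025

# The ratio reported above: scoring below the platform median means
# more accurate forecasts on the same question set.
print(round(0.23 / 0.301, 2))  # 0.76
```

The relative scores quoted for the INFER competitions are a different, competition-specific statistic (scored against the crowd, where more negative is better), but they rest on the same squared-error foundation.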
The team has produced public forecasts on critical topics including AI existential risk and nuclear risk.[25]
Sage and AI Digest
Lifland co-founded Sage, an organization focused on building interactive AI explainers and forecasting tools.[26] One of Sage's key projects is AI Digest, which received $550,000 from Coefficient Giving for its work, with an additional $550,000 for forecasting projects.[27] The organization aims to make AI developments more accessible to broader audiences through interactive tools and clear explanations.
Role in the AI Safety Community
Lifland is active in the AI safety and alignment communities, particularly through LessWrong and the Effective Altruism Forum. He serves as a mentor in the MATS Program, focusing on the Strategy & Forecasting and Policy & Governance streams.[28] He has also been featured in the documentary "Making God," which explores AGI risks.[29]
Lifland has taken the Giving What We Can Pledge, committing to donate 10% of his lifetime income to effective charities.[30]
Criticisms and Controversies
Lifland's work, particularly the AI 2027 timelines model, has faced methodological criticism from community members. In a detailed critique posted to LessWrong, the EA Forum, and Substack, the forecaster "titotal" described the model's fundamental structure as "highly questionable," with little empirical validation and poor justification for parameters such as superexponential time-horizon growth curves.[31] Titotal argued that models need strong conceptual and empirical justifications before influencing major decisions, characterizing AI 2027 as resembling a "shoddy toy model stapled to a sci-fi short story" disguised as rigorous research.[32]
Critics have also raised concerns about philosophical overconfidence, warning that popularizing flawed models could lead people to make significant life decisions based on shaky forecasts.[33] Others counter that inaction on short timelines could be costlier if the forecasts prove accurate.[34]
Lifland responded to these criticisms by acknowledging errors and reviewing titotal's critique for factual accuracy. He agreed to changes in the model write-up and paid $500 bounties to both titotal and another critic, Peter Johnson, for identifying issues.[35][36] The team released a detailed response explaining their reasoning more thoroughly, including their justification for the model's assumptions.[37]
Other criticisms include:
- Lack of skeptic engagement: Some community members felt AI 2027 did not sufficiently address skeptical frameworks or justify its models against competing views[38]
- Unverifiable predictions: Concerns that some predictions are difficult to validate empirically[39]
Lifland has been forthright about forecast misses and has regularly updated his timelines as new evidence emerges.[40] No major personal controversies or ethical issues have been documented beyond these methodological debates.
Sources
Footnotes
- Lawfare Media - Daniel Kokotajlo and Eli Lifland on AI 2027
- Eli Lifland Personal Website
- Eli Lifland Google Scholar Profile
- AI Futures Project About Page
- AI Futures Project About Page
- ControlAI Newsletter - Future of AI Special Edition
- Lawfare Media - Daniel Kokotajlo and Eli Lifland on AI 2027
- ControlAI Newsletter - Future of AI Special Edition
- CEPR Webinar - AI 2027 Scenario Forecast
- AI Futures Blog - Clarifying Timelines Forecasts
- Citation rc-9ca6 (data unavailable)
- AI Futures Blog - Clarifying Timelines Forecasts
- Marketing AI Institute - Moving Back AGI Timeline
- EA Forum - Samotsvety's AI Risk Forecasts
- Eli Lifland Personal Website
- Manifund - AI Digest Project
- MATS Program - Eli Lifland Mentor Profile
- EA Forum - Making God Documentary
- Eli Lifland Personal Website
- LessWrong - Deep Critique of AI 2027 Timeline Models
- LessWrong - Deep Critique of AI 2027 Timeline Models
- EA Forum - Practical Value of Flawed Models
- EA Forum - Practical Value of Flawed Models
- AI Futures Notes Substack - Response to Titotal Critique
- EA Forum - Practical Value of Flawed Models
- AI Futures Notes Substack - Response to Titotal Critique
- ControlAI Newsletter - Future of AI Special Edition
- AI Futures Blog - Clarifying Timelines Forecasts